A bit more context, since you might wonder how customers can cause Sev1s.

Well, I work for a database technology company, and we provide a managed service offering. This managed service offering has SLAs that essentially enforce a 5-minute response time for any “urgent” issue.

Well, a common urgent issue is that a customer suddenly wants to load in a bunch of new data without informing us, which causes the cluster to stop accepting writes.

It’s to the point where most, if not all, urgent pages result in some form of scaling of the cluster.

Since this is customer-driven behavior, there is no real way to plan for it - and since these particular customers have special requirements (and thus less ability to automate scaling operations), I’m unsure whether there is any recourse here.

It’s to the point that it doesn’t even feel like an SRE team anymore; we should just be called “on-demand scaling agents,” since we’re constantly trying to scale ahead of our customers.

All in all, I’m starting to feel like this is a management/sales-level issue that I cannot possibly address. If we’re selling this managed service offering as essentially “magic” that can be scaled whenever they need it, then it seems like we’re being set up for failure at the organizational level. Not to mention that no one is being smart about the costs behind scaling and factoring them into these contracts.

So, fellow SREs, have you had to have this conversation with a larger org? What works for something like this? What doesn’t? Should I just seek greener pastures at this point?

P.S. - Posted to c/Programming due to the lack of a c/SRE

  • th3raid0r (OP)
    4 months ago

    Probably not feasible in our case. We sell our DB tech based on the sheer IOPS it’s capable of. It already alerts the user if the write cache is full or the replication cache is backing up, too.

    The problem is that, at full tilt, a 9-node cluster can take in over 1 GB/s of new data. This is fine if the customer is overwriting old records and doesn’t require any new space. It’s just more common that the customer added a new microservice and didn’t think through how much data it requires, causing a rapid increase in DB disk usage or IOPS that the cluster wasn’t sized for.
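    To make the sizing math concrete, here’s a rough sketch - purely illustrative, with made-up capacity numbers and a function name that isn’t from any real product - of projecting how long a cluster has before genuinely new data exhausts its disk:

```python
# Hypothetical sketch: project how long until a cluster exhausts its disk,
# given current usage and the net ingest rate. All numbers are illustrative.

def hours_until_full(capacity_gb: float, used_gb: float,
                     net_ingest_gb_per_s: float) -> float:
    """Hours until disk exhaustion at the current net ingest rate.

    Returns infinity when net ingest is zero or negative, i.e. the
    customer is overwriting old records rather than adding new data.
    """
    if net_ingest_gb_per_s <= 0:
        return float("inf")
    free_gb = capacity_gb - used_gb
    return free_gb / net_ingest_gb_per_s / 3600

# At the 1 GB/s figure above, 50 TB of free space lasts under 14 hours.
print(round(hours_until_full(100_000, 50_000, 1.0), 1))  # 13.9
```

    The point of a projection like this is that it turns “the cluster suddenly stopped accepting writes” into an alert hours ahead of time, instead of an urgent page after the fact.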

    We do have another product line in the works (we call it DBaaS) and that can autoscale because it’s based on clearly defined service levels and cluster specifications. I don’t think that product will have this problem.

    It’s just that these super mega special (read: big, important, Fortune 100) companies have requirements that mean they need something more hand-crafted. Otherwise we’d have automated the toil by now.

    • snowe
      4 months ago

      As soon as you go down the path of customization for “special clients,” you’ve already lost the battle. The business needs to agree not to sell something like that. I’m not being helpful here, but once you’ve started customizing like that to land massive clients, it will never end and will slowly suffocate your company.

      • @[email protected]
        4 months ago

        When I was working in enterprise software, we had two ways of handling special customer requirements.

        The product manager would engage with the sales engineer to identify whether this was part of a feature that other customers of similar size or industry might need.

        If so, we would design the feature for the broadest use cases and put it on the development roadmap.

        If it was highly specific to one customer, we would offer customization work on a contract basis and keep it as a separate code branch and environment.

      • @[email protected]
        4 months ago

        Yeah, this sounds more like an issue with how the company interacts with clients and the expectations that get set.

        My comment also isn’t helpful; I’m just saying the situation sucks when you’re the employee dealing with it.

        In my view, from some years in customer service and tech, you either need to develop a more robust system to prevent this behavior or start slapping clients on the wrist for it. Otherwise they will continue to walk all over your company. The C-levels don’t care, because the customer is happy, shit gets done, and they get paid. However, if a client runs into an issue due to their own negligence and you’re not there immediately to fix it, they either learn to prevent the issues themselves or switch to another service.

        There are points where you may need to grin and bear it, but it’s not sustainable as you mentioned.

        My favorite issue, which has been happening far too frequently, is that my company takes on a new client or a new request from an existing client without confirming that the software can handle the request. Then, right before their deadline (typically 1-2 days out), they go, “oh, this value isn’t what we expected,” or “can we provide X to the client?”

        We sure can fix that, but it won’t magically happen in your expedited timeframe. Failure to plan on your end does not constitute an emergency on my end.

    • @[email protected]
      4 months ago

      How are they loading this data? An API? Is it not possible to align disk tiers to API requests per minute - e.g., API responses limited to one per 1 ms for some clients and one per 0.1 ms for others?

      You’re pretty forthcoming about the problems, so I genuinely hope you get some talking points, since this issue affects the app and DB design, sales, and maintenance teams at a minimum. Considering all aspects will give you a better chance of getting the business to realise there’s a problem that affects customer experience.

      On the ticket-handling side, maybe add a process to auto-respond to rate-limited/throttled customers with: “Your instance has been rate limited because it reached the {tier limit} for your performance tier. This limit applies until {rate limit block time expiry}. Support tickets related to performance or limits will be classified P3 until this rate limit expires.”
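      As a sketch of that policy - purely illustrative, with made-up type and function names and priority tiers - the ticket-priority cap could look something like:

```python
# Hypothetical sketch of the policy above: while a customer's cluster is
# rate limited for exceeding its purchased tier, cap their support tickets
# at P3. All names and tiers here are made up for illustration.

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class RateLimitState:
    limited: bool
    expires_at: datetime

def effective_priority(requested: str, state: RateLimitState,
                       now: datetime) -> str:
    """Return the ticket priority after applying the rate-limit cap."""
    if state.limited and now < state.expires_at:
        # Priorities sort lexicographically ("P1" < "P3"), so max() demotes
        # anything more urgent than P3 down to P3.
        return max(requested, "P3")
    return requested

now = datetime.now(timezone.utc)
limited = RateLimitState(limited=True, expires_at=now + timedelta(hours=1))
print(effective_priority("P1", limited, now))   # P3: no queue jumping
print(effective_priority("P4", limited, now))   # P4: already below the cap
print(effective_priority("P1", RateLimitState(False, now), now))  # P1
```

      The key property is that the cap expires with the rate limit itself, so a customer who later hits a genuine product issue still gets the full priority SLA.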

      Work with your sales and contracts teams to update the SLA so that rate-limited customers are excluded from the priority SLA.

      I guess I’m still in “maybe there’s more you can do to get your feet out of the fire for customer self-inflicted injury” territory - like correctly classifying customer issues. It’s bad when one customer can misclassify something, jump the queue, and delay the response to another customer’s real issue while everything is working as intended.

      If a customer was warned and did it anyway, it can’t be a top-priority issue - which is your argument, I guess. Customers who need more but pay for less, and then expect more than they get. It’s really not your fault or problem, but if it’s affecting you, I’m wondering how to make it affect you less.