Our Story

We started OliveVPS because hosting was boring.

And expensive. And unreliable. And run by support bots that never read your ticket. We thought servers deserved better, and so did the people who run them.

Founded 2024

Engineers, not marketers.

OliveVPS was founded by a team of infrastructure engineers who'd spent a decade running production workloads on every major cloud, and who were tired of the same three problems: opaque pricing, slow support, and sluggish disks.

We built the host we wished existed. Transparent flat pricing. NVMe storage on every plan. Real humans answering tickets in minutes, not days. And, because life is too short for boring brands, a small obsession with cats.

😺
What we believe

Three principles. Nine lives.

💎

Honesty over hype

We publish real benchmarks, real uptime numbers, and real network maps. If we mess up, we say so, and we credit your account before you ask.

⚡

Speed is a feature

NVMe everywhere. Dedicated cores. 10 Gbps networking standard on most plans. Slow infrastructure is a tax on every line of code you write.

🤝

People run servers

Every support ticket reaches a real engineer. No outsourced first-line, no "have you tried turning it off and on again." We've all been on your side of the screen.

By the numbers

Not bad for two years.

12,000+
Active Customers
42,000+
Servers Deployed
99.998%
2025 Uptime
7 min
Median Response

The OliveVPS origin story

Most hosting companies are founded by marketers. They identify a market, raise money, white-label someone else's infrastructure, and slap a logo on it. Six months later you're paying $50/mo for a server worth $10/mo, the support team is in a third country, and the "uptime guarantee" is a marketing line with no teeth.

OliveVPS was founded the other way around. The five of us spent a combined 50+ years running production infrastructure (at startups, at scale-ups, and at one large hyperscaler) before we ever thought about starting a company. We knew exactly what we wanted in a hosting provider, because we knew exactly what every host we'd ever used had gotten wrong.

The first server we deployed was in Frankfurt, on a refurbished EPYC system we paid for with our own savings. The second was in New York. The third was in Tokyo. By the time we'd built out 10 regions, the company had a name (after one of the founders' cats, an exceptionally stubborn black-and-white tuxedo), a brand (cats, obviously), and our first 100 customers.

Why cats?

The honest answer is: we genuinely like cats. Three of them live in our main office and have unrestricted access to the keyboards, the snack drawer, and the standing desks. Their names are Olive (the company namesake), Pixel (calico, occasionally walks across the production keyboard), and Routher (orange, named after a typo nobody fixed).

The marketing answer is: cats are the right metaphor for good infrastructure. Quiet. Low-maintenance. Slightly aloof. Fast when they need to be, asleep otherwise. They land on their feet. They have nine lives. Show us a better animal for representing reliable hosting and we'll consider switching mascots.

We don't pretend the cats run the servers. They mostly sleep. But they remind us, every day, that good infrastructure should be quiet, low-maintenance, and slightly aloof. The best server is the one you forget about, until you need to email support, and then they answer in seven minutes.

Our philosophy on pricing

The hosting industry has perfected the art of complicated pricing. Hyperscalers charge per CPU-second, per GB-second, per network operation, per API call, per zone-transfer, per DNS query. By the time you understand your bill, the bill is bigger than you expected. By the time you optimize the bill, the pricing has changed.

We do flat monthly pricing. The price you see is the price you pay. Bandwidth overages are charged at $0.005/GB (roughly 18× cheaper than AWS egress) and are disclosed up front, not buried on a billing page. There are no zone-transfer fees, no inbound charges, no premium-region surcharges, and no "support tier" pricing.
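As a sanity check on the "18× cheaper" figure, here is a quick sketch comparing a 2 TB overage at the disclosed $0.005/GB rate against a typical hyperscaler egress rate. The $0.09/GB AWS figure is an assumption based on commonly published first-tier egress pricing, not something stated on this page.

```python
# Overage-cost comparison. The OliveVPS rate comes from this page;
# the AWS rate is an assumed typical published egress price.
OLIVEVPS_OVERAGE_PER_GB = 0.005  # USD/GB, disclosed up front
AWS_EGRESS_PER_GB = 0.09         # USD/GB, assumption for comparison


def overage_cost(extra_gb: float, rate_per_gb: float) -> float:
    """Cost of bandwidth used beyond the plan's included allowance."""
    return extra_gb * rate_per_gb


extra = 2000  # hypothetical 2 TB over the included allowance
print(f"OliveVPS overage: ${overage_cost(extra, OLIVEVPS_OVERAGE_PER_GB):.2f}")
print(f"AWS egress:       ${overage_cost(extra, AWS_EGRESS_PER_GB):.2f}")
print(f"Ratio: {AWS_EGRESS_PER_GB / OLIVEVPS_OVERAGE_PER_GB:.0f}x")
```

At these rates the same 2 TB costs $10 versus $180, which is where the 18× claim comes from.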

We can do this because our infrastructure is straightforward: we own the hardware, we lease colocation space directly, and we run a tight engineering team. We're not optimizing for shareholder returns or growth-at-all-costs. We're optimizing for customers who stay with us for years.

Our philosophy on support

The reason most hosting support is bad is structural. Companies with thousands of customers can't afford to staff senior engineers on every shift, so they triage with junior agents using runbooks. The agents can't actually fix anything; they have to escalate. Escalations queue. By the time you reach someone empowered to help, hours or days have passed.

We made a deliberate decision to staff our support channel exclusively with engineers. Every person who answers a ticket can investigate the issue, diagnose it, and fix it themselves, without escalation. That's expensive. We pay engineering salaries to people who could be working on infrastructure instead. But the result is a 7-minute median response time and customers who actually like dealing with us.

What we're working on next

Our 2026 roadmap is publicly committed:

🌎

New regions

Mexico City (Q2), BogotΓ‘ & Johannesburg (Q3), Stockholm & Madrid (Q4)

☸️

Managed Kubernetes

Beta Q3, GA Q4: automatic node pool management for VPS customers

📦

Object storage

S3-compatible API everywhere, $0.005/GB retention + $0.005/GB egress

🌐

Floating IPs

Portable IPv4 you can attach and detach across servers in seconds

🔗

Private networking

Free VLANs between your VPS instances within a region

🚀

Dedicated GPUs

NVIDIA L40S on-demand in Frankfurt, Dallas, and Tokyo

Working with us

We're a small but growing team: roughly 35 engineers across nine time zones. We're remote-first, fully distributed, and currently hiring SREs, network engineers, and one more support engineer who likes cats. If you're interested, email hello@olivevps.com with your background and a cat picture.

We also love hearing from customers about what we should build next. Some of our best features started as a single email from someone who said "wouldn't it be cool if you could..." If you have an idea, send it our way. We read every email.

The best hosting company is the one you forget about, until you need to email them, and then they answer in seven minutes.

– how we measure success

Our infrastructure philosophy

We believe in owning the things that determine quality. We own our hardware: every server in every region was specced, purchased, racked, and configured by our own engineers. We don't white-label someone else's cloud. We don't resell capacity from larger providers. When something breaks, we know exactly what's broken and exactly how to fix it, because we built it.

This approach is more expensive than the alternative. A reseller can launch in a new region by buying capacity from a hyperscaler; we have to negotiate colocation, ship hardware, and run burn-in for two weeks before opening to customers. But the difference is visible in performance, in reliability, and in our ability to make changes. When we want to roll out 10 Gbps networking to a region, we just do it. When we want to add nested virtualization, we just push a kernel update. There is no upstream vendor blocking us.

Why we're remote-first

Our team is distributed across nine time zones, from Berlin to Auckland. This started as a practical choice β€” to provide 24-hour support coverage without anyone working night shifts β€” but it's become a core part of how we operate. We've never had a physical headquarters. We meet in person twice a year, in cities we vote on. The rest of the time we work from wherever home is.

Distributed teams force you to write things down. Decisions go in shared documents. Architecture choices go in design docs. On-call rotations go in PagerDuty. The result is a company with extremely good institutional memory and very few "tribal knowledge" gaps. New engineers get productive in a week because everything is documented, and customers benefit because the engineer answering their ticket on Tuesday at 3am has access to the same context as the engineer who solved a similar issue three months ago.

How we handle outages

Outages happen. Networks flap. Hardware dies. Software has bugs. The question isn't whether you'll have outages (every host does); it's how you handle them when they occur.

Our policy is radical transparency. When a region has degraded performance, we publish a status page incident within minutes of detection. When the issue is resolved, we publish a post-mortem within 72 hours. The post-mortem describes what broke, what we did about it, what the root cause was, and what we're changing to prevent recurrence. We don't blame "third parties" or "DDoS attacks" unless those were genuinely the cause. We don't hide details. Customers consistently tell us our post-mortems are the best part of how we handle problems.

SLA credits are issued automatically. You don't need to file a ticket, fill out a form, or argue with our support team. If a region misses 99.99% uptime in a calendar month, every customer in that region gets a credit on their next invoice, along with an email explaining why.
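For a sense of scale, here is a minimal sketch of the downtime budget those uptime figures imply. The 30-day month is a simplifying assumption; the 99.99% threshold and the 99.998% annual figure come from this page.

```python
def downtime_budget_minutes(uptime_pct: float, days: int = 30) -> float:
    """Minutes of downtime allowed over a `days`-day window at `uptime_pct`."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - uptime_pct / 100)


# The 99.99% monthly SLA allows roughly 4.3 minutes of downtime per
# 30-day month before credits trigger.
print(f"{downtime_budget_minutes(99.99):.2f} min/month")

# The 99.998% uptime quoted for 2025 corresponds to roughly 10.5
# minutes over a full 365-day year.
print(f"{downtime_budget_minutes(99.998, days=365):.2f} min/year")
```

In other words, a region can be down for about four minutes in a month before every customer there is automatically credited.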

How we hire and keep good engineers

The single biggest factor in our service quality is the quality of our engineering team. We've been deliberate about how we hire and how we keep people happy.

Hiring is slow on purpose. We typically interview 30-50 candidates for every engineer we hire, and we won't lower the bar to fill a seat faster. Every engineer has on-call responsibilities, support responsibilities, and infrastructure responsibilities; there's no separation between "feature engineers" and "operations engineers." This means everyone we hire needs to be operationally fluent, which dramatically narrows the candidate pool.

Compensation is above market. We pay senior engineering rates because the work requires senior engineering judgment. We don't have a tiered support team where Tier 1 is junior and Tier 3 is senior; everyone is senior. This is expensive and we believe it's worth it.

On-call is reasonable. Our on-call rotations are 7 days every 6 weeks, and the on-call engineer gets compensatory time off after rotations with significant incidents. We aggressively automate things that wake people up: if an incident has happened twice, we either fix the underlying cause or build automatic remediation.

What's hard about this business

People sometimes assume hosting is easy because servers are commoditized. The reality is the opposite. Server hardware is largely commoditized; everything around it (networking, power, cooling, peering, customer support, billing, abuse handling) is where the difficulty actually lives.

The hardest things we deal with day to day are exactly those operational concerns.

None of this is glamorous. None of it is what marketing pages talk about. All of it determines whether your servers are actually fast and reliable. We spend most of our engineering time on these unglamorous problems because that's where the actual quality lives.

Our take on the future of hosting

The hosting industry is consolidating in two directions simultaneously. At the high end, three hyperscalers (AWS, Google, Microsoft) capture an ever-larger share of enterprise workloads through service breadth and ecosystem lock-in. At the low end, a handful of automated VPS providers (DigitalOcean, Vultr, Linode, Hetzner) capture price-sensitive developers through self-service simplicity.

We're betting that there's room for a third category: opinionated, engineering-led, mid-market hosting. Customers who need more performance and reliability than the cheapest VPS can deliver, but who don't need (or can't afford) the complexity of a hyperscaler. This is the segment we serve, and based on our growth, it's a real and underserved market.

We expect to add more regions, more products (object storage, managed Kubernetes, GPU instances), and more capabilities over the next few years. We don't expect to ever become a hyperscaler. We're not optimizing for that. We're optimizing for being the best version of what we already are: a hosting company that engineers actually like.

What customers say about working with us

The most consistent feedback we get from customers is that switching to OliveVPS feels like switching from a 1990s-era support experience to something genuinely modern. Tickets get answered. Engineers actually engage with the technical substance of questions. Outages get explained instead of glossed over. None of this should feel revolutionary, but for many customers coming from large incumbents, it does.

Customers also tell us that our pricing predictability is a quiet superpower. CFOs hate surprise bills, and our flat monthly pricing means there are no surprises. The same is true for engineering teams who don't want to spend their week diagnosing why this month's hyperscaler bill is 40% higher than last month's.

Sustainability and energy practices

Data centers are energy-intensive, and we take seriously our responsibility to operate efficiently. All of our European facilities run on 100% certified renewable energy. Our North American facilities have committed to 100% renewable by end of 2026. Our Asia-Pacific facilities are at varying stages depending on local grid composition; we publish region-by-region carbon intensity numbers on our public sustainability page.

We also run efficient hardware. Modern AMD EPYC and Intel Xeon CPUs deliver 3–5× better performance per watt than the hardware they replace. Our data centers run hot-aisle/cold-aisle containment with PUE values consistently between 1.2 and 1.4. We don't run heating in our facilities; we use waste server heat to warm office spaces in our Frankfurt and Helsinki sites.
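For readers unfamiliar with the metric: PUE (power usage effectiveness) is total facility power divided by IT equipment power, so the 1.2-1.4 range quoted above corresponds to 20-40% overhead (cooling, power conversion, lighting) on top of the IT load. A small sketch with hypothetical load figures:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power over IT power."""
    return total_facility_kw / it_equipment_kw


# Hypothetical example: a rack row drawing 100 kW of IT load.
it_load = 100.0
for overhead in (20.0, 40.0):  # kW spent on cooling, conversion, etc.
    value = pue(it_load + overhead, it_load)
    print(f"{overhead:.0f} kW overhead -> PUE {value:.2f}")
```

A PUE of exactly 1.0 would mean every watt entering the building reaches the servers; real facilities always sit above that.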

Frequently asked questions about OliveVPS

Where is OliveVPS legally registered?

Olive Hosting Inc. is a Delaware C-corporation registered in the United States. Our European operations are conducted by Olive Hosting Europe GmbH, registered in Frankfurt, Germany. Our Indian operations are conducted by Olive Hosting India Pvt. Ltd., registered in Bangalore. We have entity registrations in every region where required by local law.

Is OliveVPS profitable?

Yes, since Q3 2025. We're not venture-capital-funded and have never raised external capital. The company was founded with the founders' personal savings and has been self-sustaining from operating revenue since month 18. This means we're free to make long-term decisions about quality and customer experience without quarterly growth pressure from investors.

Does OliveVPS work with resellers or partners?

We have a small partner program for managed service providers and web agencies who deploy multiple servers per month for client work. Email partnerships@olivevps.com if you're interested. We're not currently accepting consumer-grade affiliate marketers.

Can I tour an OliveVPS data center?

For security reasons, we don't offer customer tours of our data center facilities. The colocation operators we partner with (Equinix, Telehouse, Interxion, NextDC, etc.) typically don't permit visitors who aren't named on the access list either. We do publish detailed information about each facility's certifications, location, and operator on the relevant location page.

Where can I follow OliveVPS announcements?

Our engineering blog at blog.olivevps.com is the canonical source for product announcements, post-mortems, and infrastructure deep-dives. We also post on Twitter (@olivevps), Mastodon (@olivevps@hachyderm.io), and LinkedIn. Status updates are at status.olivevps.com.

Want to work with us?

We're hiring SREs, network engineers, and one more support engineer who likes cats.

Get in Touch β†’