Someone Turned My VPS Into a Crypto Miner (And How I Fixed It)

My server got hacked by crypto miners hiding in memory. Here's how I found them, saved my data, and locked the door.

It started with a slow app

I was just chilling, working on something else, when a friend messaged me:

“Hey, your app is super slow. Like, really slow. Is something wrong?”

I didn’t think much of it at first. Maybe a database query was acting up. Maybe there was a memory leak. Normal stuff.

Then I checked my VPS dashboard.

CPU usage: 95%.

For the past few days.

That’s not normal. My little 2GB droplet usually sits at 5-10% CPU running a few Node.js apps and some databases. 95% means something is working really hard.

Too hard.

I logged into my server to see what was going on.


The first red flags

I ran top to see what was eating my CPU.

This is what I saw:

pipewire - 34% CPU
xdg-bdus - 51% CPU

Wait a second.

pipewire is an audio server. For desktops. Why is that on my headless Ubuntu server?

And xdg-bdus? That’s not even a real thing. It looks like someone typed xdg-dbus wrong. That’s a desktop service too.

I don’t have a desktop. I don’t have audio. I don’t have any of that stuff.

Someone else put it there.

I got hacked.


The malware was hiding in plain sight

I started looking around.

First, I checked what else was running:

ps aux | grep pipewire

Found multiple processes. And not just pipewire - there were also multiple copies of a script called alive.sh running from /tmp/runnv/.

alive.sh. That sounds like a watchdog script. Something that makes sure the miner keeps running even if you kill it.

Then I checked crontab for the postgres user:

sudo crontab -u postgres -l
* * * * * /var/lib/postgresql/12/main/postmaster

Every single minute, something runs from that location. postmaster is normally the name of PostgreSQL's main process, but the real binary lives under /usr/lib/postgresql/ - not inside the data directory.

I checked what that file actually was. It wasn’t PostgreSQL. It was the miner.
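
If you want to run the same check yourself, the file command plus a checksum against the packaged binary gives a quick answer. The paths below are Ubuntu's defaults for PostgreSQL 12 - adjust for your setup (historically, postmaster is just another name for the postgres binary):

file /var/lib/postgresql/12/main/postmaster
sha256sum /var/lib/postgresql/12/main/postmaster /usr/lib/postgresql/12/bin/postgres

If the hashes don't match, or file calls it some random stripped binary sitting in your data directory, it's not your database.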

But here’s where it got weird.


They broke PostgreSQL on purpose

I tried to check if PostgreSQL was even running.

sudo systemctl status postgresql

It said “active (exited)”. That’s strange. PostgreSQL shouldn’t be “exited” - it should be “running”.

I looked at the service file:

sudo systemctl cat postgresql
ExecStart=/bin/true

What.

They replaced the actual PostgreSQL startup command with /bin/true. That command does nothing and exits successfully. So the service shows as “active” but PostgreSQL never actually starts.

Why would they do that?

Two reasons:

  1. It hides the fact that PostgreSQL is broken
  2. It forces their fake postmaster (the miner) to look like the real PostgreSQL process
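
If you hit the same trick, systemctl can show you where every fragment of a unit comes from, and - assuming the attacker edited an override or a copy under /etc rather than replacing the packaged file itself - it can also put things back:

systemctl cat postgresql
sudo systemctl revert postgresql
sudo systemctl daemon-reload

On Ubuntu the real work happens in the per-cluster unit (postgresql@12-main in my case), so it's worth pointing the same commands at that one too.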

These guys were smart. This wasn’t some random script kiddie.


The fileless nightmare

I killed the miner processes. Deleted the cron jobs. Removed the binaries from /tmp/runnv/.

A few minutes later, they were back.

I killed them again. They came back again.

I was stuck in a loop.

Then I looked deeper at one of the running processes:

sudo ls -l /proc/4177314/exe
lrwxrwxrwx ... /proc/4177314/exe -> '/memfd: (deleted)'

memfd. That’s a Linux feature (memfd_create) that lets you run programs entirely from memory. No files on disk. Nothing to delete.

This was fileless malware. It loads itself into RAM and runs from there. Traditional antivirus and file scanners won’t find it.
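
If you want to sweep your own box for this, a crude but quick check is to look at what every process's executable actually resolves to:

sudo ls -l /proc/[0-9]*/exe 2>/dev/null | grep memfd

Anything pointing at a memfd: path instead of a real file on disk deserves a very hard look.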

Something else was respawning it, but I couldn’t find what. I checked crontabs for all users. I checked systemd timers. I checked init scripts. Nothing obvious.
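
For reference, this is roughly the sweep I mean - standard Ubuntu locations, adjust for your distro:

# every user's crontab
for u in $(cut -d: -f1 /etc/passwd); do echo "== $u"; sudo crontab -l -u "$u" 2>/dev/null; done
# systemd timers
systemctl list-timers --all
# system-wide cron directories
ls /etc/cron.d /etc/cron.hourly /etc/cron.daily /etc/cron.weekly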


The breakthrough moment

I noticed a pattern after watching the processes for a while.

Every single suspicious process ran as the postgres user. Not root. Not my user. Always postgres.

So I thought: what happens if the postgres user can’t run anything?

sudo usermod -s /usr/sbin/nologin postgres
sudo usermod -L postgres

The first command changes the postgres user’s shell to nologin, so no new session can start as that user. The second locks the account’s password entirely.

I watched top like a hawk.

CPU dropped from 95% to 0.5% within seconds.

No more pipewire. No more xdg-bdus. No more rogue processes. All gone.

The miner couldn’t run because the user it depended on was locked.


But now I had a new problem

PostgreSQL also runs as the postgres user.

By locking that user, I also broke my own database.

I couldn’t start PostgreSQL normally. The service was already broken by the hackers (remember ExecStart=/bin/true?), and now the user was locked too.

But I needed my data. My apps needed those databases.

I had to find a way to start PostgreSQL just once, grab everything, then shut it down.


The rescue operation

First, I fixed the ownership on the data directory. The hackers had messed with it.

sudo chown -R postgres:postgres /var/lib/postgresql/12/main

Then I started PostgreSQL manually, bypassing the broken systemd service completely. I used a different port too, just in case something was still watching the default port.

sudo -u postgres /usr/lib/postgresql/12/bin/postgres -D /var/lib/postgresql/12/main -c port=5433 &

I waited a few seconds. Held my breath.

Then I tried connecting:

sudo -u postgres psql -p 5433 -c "\q"

No error. It worked.

I quickly dumped everything:

sudo -u postgres pg_dumpall -p 5433 > all_databases.sql

47MB. All my data. Safe.

Then I killed PostgreSQL and locked the postgres user again.
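
(If you ever need the same escape hatch: pg_ctl ships next to the postgres binary and stops a manually started instance cleanly. Paths are the same Ubuntu defaults as above.)

sudo -u postgres /usr/lib/postgresql/12/bin/pg_ctl stop -D /var/lib/postgresql/12/main -m fast
sudo usermod -L postgres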


How they actually got in

I checked my firewall rules. Or rather, the lack of them.

I never set up a firewall. I thought “I’ll do it later” and never did.

Port 5432 (PostgreSQL) was wide open to the internet. And I was still using the default password for the postgres user.

Anyone scanning for open PostgreSQL ports could find my server, log in with postgres/postgres, and do whatever they wanted.

That’s exactly what happened.

Once they had access, they:

  1. Installed the miner as a fileless process running from memory
  2. Added a cron job running every minute to make sure the miner stays alive
  3. Modified the PostgreSQL systemd service to hide the fact that it was broken
  4. Used the postgres user to run everything so it looked somewhat legitimate

If I hadn’t locked that user, I might still be fighting this thing.
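
By the way, seeing what your server exposes takes one command - ss ships with Ubuntu and lists every listening port and the process behind it:

sudo ss -tlnp

If 0.0.0.0:5432 shows up in that list on an internet-facing box, you’re where I was.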


What I learned from all this

A few things.

First, close your ports. Seriously. A basic firewall would have prevented this entire mess. With ufw enabled, it’s one line to keep PostgreSQL off the internet: sudo ufw deny 5432. That’s it.

Second, change default passwords. The postgres user password was still the default. I knew better. I just didn’t do it.

Third, monitor your servers. If I had alerts for high CPU, I would have caught this days earlier. A simple cron job that emails you when CPU goes above 80% is better than nothing.
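
Here’s roughly what I mean - a minimal sketch, assuming vmstat is available (it’s part of procps on Ubuntu) and that the mail command is set up to actually deliver email. The address is a placeholder:

#!/bin/bash
# cpu_alert.sh - rough CPU check, meant to run from cron every few minutes
THRESHOLD=80
# second vmstat sample is a 1-second average; column 15 is idle %, so 100 - idle = busy
usage=$(vmstat 1 2 | tail -1 | awk '{print 100 - $15}')
if [ "$usage" -ge "$THRESHOLD" ]; then
  echo "CPU at ${usage}% on $(hostname)" | mail -s "High CPU alert" you@example.com
fi

Drop it in cron with something like */5 * * * * /usr/local/bin/cpu_alert.sh and you’ll at least hear about the next 95% spike the same day it starts.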

And the weird trick that saved me: Lock the user the malware runs under. If you see suspicious processes all running as the same user, lock that user temporarily. If CPU drops, you found your problem. Then you can figure out how to get your data out.


What I’m doing now

I’m rebuilding the server from scratch.

The old one can’t be trusted. Fileless malware is tricky - even if I think I removed everything, there could be something hiding somewhere. A kernel module. A modified binary. A backdoor I didn’t find.

So fresh VPS. Fresh Ubuntu install. Fresh PostgreSQL.

I already have my backups downloaded to my local machine. The data is safe.

On the new server, I’m doing things differently:

  • sudo ufw default deny incoming - block everything by default
  • sudo ufw allow ssh - only allow SSH from outside
  • sudo ufw deny 5432 - PostgreSQL stays behind the firewall
  • listen_addresses = 'localhost' in the PostgreSQL config (see the sketch after this list)
  • Strong password for postgres user. Like, actually strong this time.
  • Daily automatic backups to remote storage
  • A simple monitoring script that checks CPU every 5 minutes
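
For the PostgreSQL items on that list, here’s roughly what they look like on Ubuntu’s default layout for version 12 (adjust the path for your version, and obviously pick your own password):

# /etc/postgresql/12/main/postgresql.conf
listen_addresses = 'localhost'

sudo -u postgres psql -c "ALTER USER postgres WITH PASSWORD 'a-long-random-one';"

Restart PostgreSQL after the config change, and 5432 never has to touch the public internet again.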

For anyone dealing with this right now

If you’re reading this because your server is also mining crypto, here’s what to do:

Step 1: Find the user running the miner processes (ps aux | grep -E "miner|cpu|pipewire")
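
If the grep comes up empty (miners love to rename themselves), just sort everything by CPU and read the user column:

sudo ps -eo user,pid,pcpu,comm --sort=-pcpu | head -15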

Step 2: Lock that user temporarily (sudo usermod -L username)

Step 3: Watch if CPU drops. If it does, you found your culprit.

Step 4: If that user runs important services (like postgres), get your data out first - start the service manually on a different port, dump everything, then lock the user again.

Step 5: Rebuild. Don’t try to clean an infected server. Just rebuild.

Step 6: Set up a firewall this time. Please.


The end (and a new beginning)

My old server is still sitting there in my provider’s dashboard. 1375 days of uptime. Gone.

But honestly? That uptime number was just something to brag about. It didn’t mean my server was secure. Clearly.

Now I have a clean server, proper security, and a good backup.

And I have a story to tell.

Cost of this lesson: one evening of panic, a lot of coffee, and a VPS rebuild.

Value of this lesson: I’ll never leave a port open again.


Written from a new VPS. With firewalls. And strong passwords. And backups.

Don’t be like old me. Close your ports.