Android Root & Tampering Detection — Play Integrity, RootBeer, and the Cat-and-Mouse Reality
Root detection on Android is a cat-and-mouse game where the cat keeps losing. Every check you write, someone has already written a Magisk module to defeat. Every commercial library you license, the bypass is on XDA forums within weeks. And on top of that, OEM modifications — especially in some Asian markets — ship phones that are effectively rooted from the factory, indistinguishable to your detection code from a deliberately-tampered device.
And yet, banking apps still need to do something. Regulators ask. Audit reports require it. Risk teams want a signal. The question isn’t whether to do root detection — it’s how to do it without (a) breaking your app on legitimate devices, (b) burning engineering time on checks that get bypassed in hours, or (c) pretending you have certainty you don’t have.
This post walks through root detection as the adversarial game it actually is. We’ll start with naive checks, see exactly how each one gets bypassed, escalate, see those bypassed too, and then arrive at the only real answer: the Play Integrity API for the strongest available signal, layered with custom heuristics for defense in depth, and server-side risk scoring that doesn’t rely on client honesty. Plus the OEM-modified-Android reality that complicates every assumption above. We’ll continue the banking-app frame from the Security foundation post.
What “Rooted” Even Means in 2026
The term gets thrown around imprecisely. Three categories matter:
1. Traditional rooting — user installed Magisk, Xposed, KernelSU, or a similar framework that gives system-level access. Magisk is dominant; it specifically advertises “hide from detection” capabilities and ships modules that systematically defeat root checks.
2. Rooted-from-factory — some OEM builds in certain markets ship with effective root access. Custom Chinese ROMs, certain Huawei/Honor builds post-Google sanctions, some industrial Android tablets. The user didn’t do anything; the device just is what it is.
3. Tampered without root — the app itself has been modified (decompiled, repackaged with malicious code, re-signed), running on an otherwise-stock device. No root needed; the threat is the modified APK. Frida and similar dynamic instrumentation tools sometimes work without root on certain Android versions.
Most “root detection” libraries actually mix detection of all three. That’s fine, but worth being explicit when you write your risk policy: “reject all rooted devices” means rejecting category #2 too, which can mean rejecting a meaningful percentage of users in some markets. Whether that’s acceptable is a product decision, not a security decision.
Round 1: Naive Detection (Bypassed in Seconds)
The first-iteration root detection most developers write looks like this:
import java.io.File

fun isDeviceRooted(): Boolean {
    // Check 1: does the "su" binary exist in common locations?
    val suPaths = arrayOf(
        "/system/bin/su",
        "/system/xbin/su",
        "/sbin/su",
        "/system/app/Superuser.apk"
    )
    for (path in suPaths) {
        if (File(path).exists()) return true
    }

    // Check 2: can we actually execute "su"?
    return try {
        Runtime.getRuntime().exec("su")
        true
    } catch (e: Exception) {
        false
    }
}
This catches casual rooting from 2014. It catches almost nothing today. The bypasses:
(1) Magisk Hide / DenyList hides the su binary from any process you specify. Your banking app sees a clean filesystem; the user has full root.
(2) The File.exists() check is a syscall; Magisk hooks it for your process and returns false. The file is there; your app can’t see it.
(3) Process.exec() goes through the same hooks — your exec("su") call returns a process that does nothing instead of the real su.
Time-to-bypass: literally seconds. Magisk users have a checkbox UI: “hide from these apps,” tick your bank’s package name, done.
Round 2: Less Naive Detection (Bypassed in Minutes)
Knowing the obvious checks fail, developers escalate. Common second-round patterns:
import android.content.Context
import android.content.pm.PackageManager
import android.os.Build
import java.io.File

fun isDeviceRooted(context: Context): Boolean {
    // Check 1: build tags — test-key builds often ship on test devices and rooted phones
    if (Build.TAGS?.contains("test-keys") == true) return true

    // Check 2: dangerous app packages installed
    val rootApps = arrayOf(
        "com.topjohnwu.magisk",
        "eu.chainfire.supersu",
        "com.koushikdutta.superuser",
        "com.thirdparty.superuser"
    )
    val pm = context.packageManager
    for (pkg in rootApps) {
        try {
            pm.getPackageInfo(pkg, 0)
            return true
        } catch (e: PackageManager.NameNotFoundException) {
            // not installed, continue
        }
    }

    // Check 3: writable system partitions
    val systemPaths = arrayOf("/system", "/system/bin", "/system/xbin")
    for (path in systemPaths) {
        if (File(path).canWrite()) return true
    }

    // Check 4: known dangerous binaries in common staging paths
    val dangerousBinaries = arrayOf("busybox", "magisk", "frida-server")
    for (binary in dangerousBinaries) {
        if (File("/data/local/tmp/$binary").exists()) return true
    }
    return false
}
This is where most home-grown root detection stops. It’s also bypassed routinely:
(1) Build.TAGS check is an easy patch. Magisk modifies the value returned to specific apps. There’s a Magisk module specifically for spoofing build properties.
(2) Package name checks are defeated by “hide app from package list” features in Magisk. Your pm.getPackageInfo call simply returns NameNotFoundException for the hidden package.
(3) Writable system partition check is bypassed because Magisk intentionally doesn’t make /system writable — it uses a magisk overlay that makes the partition appear immutable to your checks while still allowing privileged operations.
(4) Binary path checks are bypassed by the same file-system hooks. The binaries exist; your app can’t see them.
The pattern: every time we write a check, we’re asking a question the OS gives us an answer to. The rooted device controls the answer. We’re asking the fox if there are any chickens in the henhouse.
Round 3: RootBeer and Commercial Libraries (Slower to Bypass, Still Bypassable)
Most banking apps ship with RootBeer or a commercial equivalent (Promon SHIELD, DexProtector, Appdome). These run dozens of checks across multiple categories.
// Conceptual usage of RootBeer
val rootBeer = RootBeer(context)
if (rootBeer.isRooted) {
    // Strong signal but not definitive
}

// RootBeer runs checks like:
// - detectRootManagementApps() — package detection
// - detectPotentiallyDangerousApps() — Frida server, Xposed
// - detectTestKeys() — build tags
// - checkForBusyBoxBinary()
// - checkForSuBinary()
// - checkSuExists() — actually invoke su
// - checkForRWPaths() — writable system paths
// - checkForDangerousProps() — system properties (ro.debuggable, ro.secure)
// - checkForRootNative() — calls a native check that's harder to hook
//
// Plus a native (NDK) check that runs the same logic but in C++,
// making it slower for Magisk modules to comprehensively hook
RootBeer’s contribution over hand-rolled checks: the native code path. Hooking a Java method is trivial for Magisk; hooking a native function requires more sophisticated tooling. Forces the bypass author to do more work.
The bypasses still exist:
(1) Magisk modules specifically targeting RootBeer’s native checks — they patch the JNI return values so the native function returns “not rooted” even though the underlying device is.
(2) Decompilation of your APK reveals you’re using RootBeer; the bypass is essentially a one-line module that hooks the specific package’s methods.
(3) Frida and other dynamic instrumentation tools can patch any return value at runtime.
The honest assessment: RootBeer raises the cost of bypass from “trivial” to “requires Magisk module knowledge”. That’s a meaningful improvement against casual attackers but not against motivated ones. It also generates false positives: some legitimate devices (custom ROMs that aren’t rooted but use modified Android, certain OEM builds) trip RootBeer’s checks. False positive rate in production at a banking app I worked on: about 0.3% of users. That’s thousands of legitimate users locked out per million.
Round 4: Play Integrity API (The Strongest Signal Available)
The fundamental problem with all client-side checks: the rooted device controls the answer. The fundamental fix: get the answer from somewhere the rooted device can’t control. That’s what Play Integrity API does.
The flow:
- Your app requests an integrity verdict from Google Play Services
- Play Services performs hardware-attested checks — signed by the device’s TEE/StrongBox
- The verdict is a signed JWT containing fields like deviceIntegrity (MEETS_DEVICE_INTEGRITY, MEETS_BASIC_INTEGRITY, MEETS_STRONG_INTEGRITY) and appIntegrity (PLAY_RECOGNIZED, UNRECOGNIZED_VERSION, UNEVALUATED)
- The verdict is sent to your server, which verifies the signature against Google’s public key
- Server makes the trust decision based on the verdict
Why this is structurally better than RootBeer:
Hardware attestation. The integrity verdict is signed by a key in the device’s TEE that the OS — even a rooted OS — cannot extract. Magisk can hide root from your app; it cannot forge Google’s signature on a verdict.
Server-side verification. Your server doesn’t trust the client’s claim about the verdict; it verifies the JWT signature against Google’s public key. The rooted device can intercept the verdict in transit, but it can’t forge a new one.
Categories of trust. The verdict gives you graduated levels: MEETS_STRONG_INTEGRITY (locked bootloader, verified boot, certified Play Protect), MEETS_DEVICE_INTEGRITY (Play-certified device), MEETS_BASIC_INTEGRITY (just a generally Android-compatible device). You can apply different policies based on which level the device meets.
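To make the graduated levels concrete, here is a sketch of how a server might collapse the verdict strings into an internal trust tier. The enum and function names are hypothetical; the MEETS_* strings are the levels described above.

```kotlin
// Hypothetical server-side mapping from the verdict's device recognition
// strings to an internal trust tier. The strongest level present wins.
enum class DeviceTrust { STRONG, DEVICE, BASIC, NONE }

fun trustTier(deviceVerdicts: List<String>): DeviceTrust = when {
    "MEETS_STRONG_INTEGRITY" in deviceVerdicts -> DeviceTrust.STRONG
    "MEETS_DEVICE_INTEGRITY" in deviceVerdicts -> DeviceTrust.DEVICE
    "MEETS_BASIC_INTEGRITY" in deviceVerdicts -> DeviceTrust.BASIC
    else -> DeviceTrust.NONE // no integrity level at all: highest-risk bucket
}
```

Different policies then key off the tier rather than off a single yes/no answer.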
// Client side: request the verdict
val integrityManager = IntegrityManagerFactory.create(context)
val nonce = generateServerNonce() // Get from your server

val request = IntegrityTokenRequest.builder()
    .setNonce(nonce)
    .build()

integrityManager.requestIntegrityToken(request)
    .addOnSuccessListener { response ->
        val token = response.token() // JWT string
        // Send the token to YOUR SERVER for verification
        sendTokenToServer(token)
    }
    .addOnFailureListener { exception ->
        // Failed to even get a verdict — itself a signal
        // Could be: no Play Services, very old Android, network issue,
        // or attacker stripped Play Services from the device
        handleIntegrityFailure(exception)
    }
Server-side verification (in your backend, not the Android code):
// Conceptual server pseudocode (Kotlin/JVM backend)
fun verifyIntegrityToken(token: String): IntegrityVerdict {
    // Decrypt and verify the JWT against Google's public key
    // (using the Play Integrity decryption keys you configure in Play Console)
    val decoded = JWT.decode(token)
    verifySignature(decoded, googlePublicKey)

    val payload = decoded.getClaim("tokenPayloadExternal").asObject()
    return IntegrityVerdict(
        deviceIntegrity = payload.deviceIntegrity,
        // ["MEETS_DEVICE_INTEGRITY"] or ["MEETS_BASIC_INTEGRITY"] or empty
        appIntegrity = payload.appIntegrity,
        // PLAY_RECOGNIZED | UNRECOGNIZED_VERSION | UNEVALUATED
        accountDetails = payload.accountDetails,
        // LICENSED | UNLICENSED | UNEVALUATED — is this a Play-purchased install
        nonceMatches = payload.requestDetails.nonce == expectedNonce
    )
}
The nonce is critical. Without it, an attacker can replay a previously-captured verdict from a clean device. The nonce binds the verdict to this specific request at this specific time; replay is impossible because your server only accepts a verdict for a nonce it just issued.
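A minimal sketch of the server-side nonce lifecycle this implies (class and method names are invented; a real deployment would back this with a shared store like Redis rather than in-process memory):

```kotlin
import java.security.SecureRandom
import java.util.Base64
import java.util.concurrent.ConcurrentHashMap

// Hypothetical single-use, time-bound nonce store. Issuing records the nonce;
// consuming removes it, so a replayed verdict carrying the same nonce fails.
class NonceStore(private val ttlMillis: Long = 5 * 60_000) {
    private val issued = ConcurrentHashMap<String, Long>()
    private val random = SecureRandom()

    fun issue(): String {
        val bytes = ByteArray(24).also { random.nextBytes(it) }
        val nonce = Base64.getUrlEncoder().withoutPadding().encodeToString(bytes)
        issued[nonce] = System.currentTimeMillis()
        return nonce
    }

    fun consume(nonce: String): Boolean {
        // remove() makes the check single-use: second consume of the same nonce fails
        val issuedAt = issued.remove(nonce) ?: return false
        return System.currentTimeMillis() - issuedAt <= ttlMillis
    }
}
```

The server calls issue() when the client asks for a nonce, and consume() when the verdict comes back; a mismatch or reuse means the verdict is rejected regardless of what it claims.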
What Play Integrity Doesn’t Tell You
Play Integrity isn’t omniscient. Limitations:
1. It’s a Google Play Services API. Devices without Google Play Services (Huawei post-sanctions, Chinese-market builds, some custom ROMs) can’t use it at all. On those devices you simply fail to obtain a verdict, which is itself a signal — but a noisy one, because some legitimate users are in this bucket.
2. The integrity levels are binary-ish. “Meets device integrity” or doesn’t. There’s no “75% confidence the device is rooted.” This forces you into a hard accept/reject decision.
3. Magisk + Zygisk + DenyList still occasionally beat it. The cat-and-mouse game continues. New Magisk modules periodically claim Play Integrity bypass; Google updates their attestation; the cycle repeats. The bar is “motivated and technically capable attacker with a recent module,” not “casual user with the Magisk app.”
4. Rate limits. Play Integrity has request quotas. You can’t check on every API call; design for one verdict per session or critical operation.
The Layered Approach That Actually Works
The honest answer is no single signal is sufficient. The pattern that works:
┌──────────────────────────────────────────────────────────────────────┐
│ Layer │ What it catches │ What it costs │
├────────────────────────────────┼──────────────────┼──────────────────┤
│ Play Integrity API │ Most rooted / │ Excludes non-GMS │
│ │ tampered devices │ devices │
├────────────────────────────────┼──────────────────┼──────────────────┤
│ RootBeer (or commercial lib) │ Casual root, │ ~0.3% false │
│ │ adds defense in │ positive rate │
│ │ depth │ │
├────────────────────────────────┼──────────────────┼──────────────────┤
│ Custom heuristics tailored │ Specific bypass │ Maintenance cost │
│ to your threat model │ patterns we’ve │ as bypasses │
│ │ seen exploit us │ evolve │
├────────────────────────────────┼──────────────────┼──────────────────┤
│ Server-side risk scoring │ Behavior anomalies│ Real engineering │
│ (device fingerprint, IP, time, │ that no client │ + ML investment │
│ velocity, transaction shape) │ check can fake │ │
├────────────────────────────────┼──────────────────┼──────────────────┤
│ Server-side enforcement of │ EVERYTHING that │ Engineering │
│ business rules │ matters for $ │ discipline │
└──────────────────────────────────────────────────────────────────────┘
The bottom row is the load-bearing one. Client-side checks raise the cost of attack and produce risk signals; the server’s job is to make the decision. A rooted device that requests a $5 transfer is treated differently from a rooted device requesting $50,000. A rooted device that’s been a customer for three years and is logging in from their usual IP is treated differently from a brand-new device on a VPN.
Senior signal: framing root detection as signal generation, not gatekeeping. The client says “here’s my best assessment of integrity”; the server combines that with everything else it knows and makes the call. This is how mature financial apps actually work, and it’s why “reject all rooted devices” is rarely the right policy.
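As a minimal illustration of what “combine that with everything else” can mean server-side (the signal weights and thresholds below are invented for illustration, not a recommendation):

```kotlin
// Hypothetical server-side scoring: each signal adjusts a score,
// and thresholds map the score to a policy outcome.
data class RiskInput(
    val deviceIntegrityMet: Boolean,
    val rootBeerPositive: Boolean,
    val fridaDetected: Boolean,
    val newDevice: Boolean,
    val amountUsd: Int
)

fun riskScore(i: RiskInput): Int {
    var score = 0
    if (!i.deviceIntegrityMet) score += 40 // strongest single signal
    if (i.rootBeerPositive) score += 20    // defense-in-depth heuristic
    if (i.fridaDetected) score += 30       // active instrumentation
    if (i.newDevice) score += 15           // behavioral context
    if (i.amountUsd > 5_000) score += 20   // operation value
    return score
}

fun decision(score: Int): String = when {
    score < 30 -> "ALLOW"
    score < 60 -> "STEP_UP" // e.g. require biometric re-auth
    else -> "REJECT"
}
```

The point is structural: a rooted-device signal alone lands in step-up territory, not auto-reject, and only stacks of signals plus high value reach rejection.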
The Policy Matrix — Not Every Operation Needs the Same Bar
Differentiate by operation:
┌──────────────────────────────────────────────────────────────────────┐
│ Operation │ Policy on integrity failure │
├────────────────────────────┼──────────────────────────────────────────┤
│ Browse account balance │ Allow with warning banner │
│ View transaction history │ Allow with warning banner │
│ Update profile, pref │ Allow │
│ Bill pay (recurring, │ Allow (low fraud risk, easy reverse) │
│ known payee) │ │
├────────────────────────────┼──────────────────────────────────────────┤
│ Transfer to known account │ Allow with step-up auth (biometric) │
│ Add new payee │ Step-up auth + 24h cooldown │
├────────────────────────────┼──────────────────────────────────────────┤
│ Transfer > $5,000 │ Reject if integrity fails AND risk high │
│ International wire │ Reject if integrity fails (period) │
│ Card setting changes │ Reject if integrity fails │
└──────────────────────────────────────────────────────────────────────┘
A blanket “reject all rooted devices” policy excludes legitimate users (developers, power users, factory-rooted device owners) from low-risk operations they have every right to perform. A graduated policy maintains security where it matters while not punishing users for having a rooted device they use to read books.
The product trade-off: stricter policy reduces fraud at the cost of false-positive lockouts. The right balance is a business decision informed by the actual fraud rate and the user-experience cost of lockouts. Don’t let an engineer set this policy in isolation; loop in fraud, product, and customer support.
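The matrix above can be encoded as data the server owns, so fraud and product can tune it without an app release. A sketch with hypothetical names:

```kotlin
// Hypothetical encoding of the policy matrix: operation -> policy when
// integrity checks fail. Lives server-side, ideally in tunable config.
enum class Operation { VIEW_BALANCE, VIEW_HISTORY, UPDATE_PROFILE, TRANSFER_KNOWN, ADD_PAYEE, INTL_WIRE }
enum class Policy { ALLOW, ALLOW_WITH_WARNING, STEP_UP, REJECT }

fun policyOnIntegrityFailure(op: Operation): Policy = when (op) {
    Operation.VIEW_BALANCE -> Policy.ALLOW_WITH_WARNING
    Operation.VIEW_HISTORY -> Policy.ALLOW_WITH_WARNING
    Operation.UPDATE_PROFILE -> Policy.ALLOW
    Operation.TRANSFER_KNOWN -> Policy.STEP_UP
    Operation.ADD_PAYEE -> Policy.STEP_UP // plus a cooldown in the matrix above
    Operation.INTL_WIRE -> Policy.REJECT
}
```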
Tampering Detection (Different From Rooting)
An attacker doesn’t need to root the device to attack you — they can repackage your APK with malicious code, sign it with their own key, and distribute it. Different threat, different defense.
The signature check:
import android.content.Context
import android.content.pm.PackageManager
import android.os.Build
import java.security.MessageDigest

fun verifyAppSignature(context: Context): Boolean {
    val expectedSignatureSha256 = "abc123..." // Your release signing certificate hash
    val packageInfo = if (Build.VERSION.SDK_INT >= 28) {
        context.packageManager.getPackageInfo(
            context.packageName,
            PackageManager.GET_SIGNING_CERTIFICATES
        )
    } else {
        @Suppress("DEPRECATION")
        context.packageManager.getPackageInfo(
            context.packageName,
            PackageManager.GET_SIGNATURES
        )
    }
    val signatures = if (Build.VERSION.SDK_INT >= 28) {
        packageInfo.signingInfo?.apkContentsSigners
    } else {
        @Suppress("DEPRECATION")
        packageInfo.signatures
    } ?: return false

    for (signature in signatures) {
        val md = MessageDigest.getInstance("SHA-256")
        val digest = md.digest(signature.toByteArray())
        val hex = digest.joinToString("") { "%02x".format(it) }
        if (hex == expectedSignatureSha256) return true
    }
    return false
}
If your APK is repackaged and re-signed, the signature won’t match. Useful, with a critical caveat: this check itself can be patched out in a tampered APK. The attacker doesn’t need to forge your signature; they just modify the function to always return true.
Defenses against this circular problem:
(1) Move the check into native (NDK) code where bytecode patching doesn’t apply.
(2) Cross-check the signature on the server — the server can refuse to honor requests from APKs whose claimed package doesn’t match the integrity verdict’s package or whose Play Integrity verdict says UNRECOGNIZED_VERSION.
(3) Ship the check obfuscated with R8 + ProGuard rules that prevent inlining of trivial methods (so the bypass isn’t a one-line change).
(4) Use Play Integrity’s appIntegrity field, which checks whether the app on the device is the same one Google distributed. UNRECOGNIZED_VERSION means tampered or repackaged.
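A sketch of the server-side cross-check described in (2) and (4), assuming you have already verified the verdict’s signature and extracted its fields (the function name and the package names are illustrative):

```kotlin
// Hypothetical server-side gate on the verdict's app-identity fields:
// only a Play-recognized build of exactly our package is trusted.
fun isAppVerdictAcceptable(
    appVerdict: String,      // e.g. "PLAY_RECOGNIZED" or "UNRECOGNIZED_VERSION"
    claimedPackage: String,  // package name reported in the verdict
    expectedPackage: String  // our release package name
): Boolean =
    appVerdict == "PLAY_RECOGNIZED" && claimedPackage == expectedPackage
```

Because this check runs on the server against a signed verdict, a tampered APK cannot patch it out the way it can patch the in-app signature check.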
Frida and Dynamic Instrumentation Detection
Frida is the tool serious attackers use. It hooks into your running process and lets the attacker inspect memory, modify return values, replace functions, all without rooting the device on some Android versions. If your app is high-value enough, Frida is in your threat model.
Detection patterns:
import java.io.File
import java.net.Socket

// 1. Check for known Frida processes
fun isFridaRunning(): Boolean {
    return try {
        File("/proc/").listFiles()?.any { dir ->
            if (dir.isDirectory && dir.name.toIntOrNull() != null) {
                // Reading another process's cmdline can fail; treat unreadable as empty
                val cmdline = runCatching { File("${dir.path}/cmdline").readText() }
                    .getOrDefault("")
                cmdline.contains("frida-server") || cmdline.contains("frida-agent")
            } else false
        } ?: false
    } catch (e: Exception) {
        false
    }
}

// 2. Check for Frida's default port (27042) being open
fun isFridaPortOpen(): Boolean {
    return try {
        Socket("127.0.0.1", 27042).use { true }
    } catch (e: Exception) {
        false
    }
}

// 3. Scan loaded libraries for Frida's injected library
fun isFridaLibraryLoaded(): Boolean {
    return try {
        val mapsFile = File("/proc/self/maps").readText()
        mapsFile.contains("frida") ||
            mapsFile.contains("gum-js-loop") ||
            mapsFile.contains("gmain")
    } catch (e: Exception) {
        false
    }
}
All of these can be hooked and bypassed by a sufficiently sophisticated attacker. They raise the cost from “run Frida out of the box” to “configure Frida to hide from these specific checks.” That’s the realistic ceiling.
The honest position: Frida detection is mostly signal-generation, not blocking. If detection fires, increase the risk score and trigger step-up authentication on the next sensitive operation. Don’t hard-block, because false positives exist (some legitimate debugging tools share characteristics).
The OEM Reality — Why This Is Genuinely Hard
Everything above implicitly assumes “normal Android.” The OEM-modified reality breaks several assumptions:
1. Some OEM builds ship without standard Play Services. Huawei devices post-2019 (in many markets), some Chinese-market builds, certain industrial Androids. Play Integrity API is unusable on these — not “sometimes fails” but “impossible to call.” You need a fallback policy: do you treat absence of Play Integrity as “rooted”? That excludes legitimate Huawei users. Do you treat it as neutral? You’ve given attackers a workaround.
The pragmatic answer most fintechs land on: treat absence of Play Services as a different risk category — not auto-reject, but auto-flag for additional server-side scrutiny. Combined with strong KYC and transaction monitoring, the risk is manageable.
2. Some OEM builds modify Build properties. Custom ro.build.tags, modified ro.product.model strings, changed kernel version reporting. Your custom heuristics will trip on these. RootBeer’s test-keys check fails on certain rooted-from-factory builds even though Magisk isn’t installed.
3. Some Chinese OEM versions of Android effectively run with elevated privileges by default. Auto-start, background process limits bypassed, system-level access for OEM apps. From your detection’s perspective, these look adversarial; from the user’s perspective, that’s just how their phone works.
4. KernelSU and rootless rooting are newer techniques that don’t require traditional Magisk. Your detection logic that searches for Magisk specifically misses these. The cat-and-mouse game generates new techniques continuously.
The strategic takeaway: your root detection is a probabilistic risk signal, never a binary truth. The product policy needs to acknowledge this. A blanket policy of “detect rooted, lock account” will generate enough false positives in markets with custom OEM builds to be a customer-support nightmare.
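One way to make that three-bucket framing explicit in server code (names are illustrative): “can’t verify” is not the same as “verified” or “failed verification”, and each bucket carries its own policy.

```kotlin
// Hypothetical bucketing of the integrity outcome. UNVERIFIABLE covers
// devices that can't produce a verdict at all (e.g. no Play Services);
// it gets extra server-side scrutiny rather than an auto-reject.
enum class IntegrityBucket { VERIFIED, UNVERIFIABLE, FAILED }

fun integrityBucket(verdictAvailable: Boolean, deviceIntegrityMet: Boolean): IntegrityBucket =
    when {
        !verdictAvailable -> IntegrityBucket.UNVERIFIABLE
        deviceIntegrityMet -> IntegrityBucket.VERIFIED
        else -> IntegrityBucket.FAILED
    }
```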
A Realistic Implementation
Putting it all together for a banking app, the architecture I’d ship:
class IntegrityCheck @Inject constructor(
    private val playIntegrityClient: IntegrityManager,
    private val rootBeer: RootBeer,
    private val signatureVerifier: SignatureVerifier,
    private val fridaDetector: FridaDetector,
    private val api: IntegrityApi
) {
    suspend fun assessIntegrity(operation: SensitiveOperation): IntegrityResult {
        val signals = mutableListOf<IntegritySignal>()

        // Layer 1: Play Integrity (strongest signal, ask the server)
        val playToken = try {
            requestPlayIntegrityToken()
        } catch (e: Exception) {
            signals.add(IntegritySignal.PlayIntegrityUnavailable(e.message))
            null
        }

        // Layer 2: Local heuristics (defense in depth, even if Play passes)
        if (rootBeer.isRooted) signals.add(IntegritySignal.RootBeerPositive)
        if (!signatureVerifier.verify()) signals.add(IntegritySignal.SignatureMismatch)
        if (fridaDetector.isFridaPresent()) signals.add(IntegritySignal.FridaDetected)

        // Send everything to the server — let it decide based on
        // the verdict, the local signals, the operation, and other risk factors
        return api.evaluateIntegrity(
            IntegrityRequest(
                playToken = playToken,
                localSignals = signals,
                operation = operation,
                sessionId = currentSessionId
            )
        )
    }
}

// The server returns the verdict; client just enforces it
sealed class IntegrityResult {
    object Allow : IntegrityResult()
    data class StepUpRequired(val authMethod: AuthMethod) : IntegrityResult()
    data class Reject(val userMessage: String, val supportCode: String) : IntegrityResult()
}
Notice what the client doesn’t do: it doesn’t make the trust decision. It collects signals, sends them to the server with the operation context, gets back a policy decision, enforces it. The server is where the decision lives because the server has visibility the client doesn’t (account history, fraud patterns, current threat intel) and because the server can’t be hooked by Magisk.
When to Stop Caring
For some apps, root detection is over-engineering. Honest categories:
Banking, payments, regulated fintech: mandatory. Audit will require it. Implement Play Integrity + RootBeer + server-side risk scoring as described above. Accept the false-positive rate as a business cost.
Healthcare with PHI, enterprise apps with sensitive corporate data: recommended. The threat model includes adversaries; the cost of a single breach is much higher than the engineering cost of these defenses.
Consumer apps, social, e-commerce, content: usually not worth it. The threat model is mostly “someone might cheat at the game” or “someone might bypass the paywall.” Server-side validation of business rules covers most of this; root detection is theatre.
Games: depends on the genre. Competitive PvP with real money on the line, yes. Single-player puzzle game, no.
The general principle: root detection is a tool with real costs (engineering time, false positives, customer support burden). Apply it where the threat justifies the cost. For an Android engineer at a banking app, this is your problem; for an Android engineer at a recipe app, it isn’t.
Closing
The honest summary of root detection in 2026: Play Integrity API is the strongest available signal, RootBeer adds defense in depth at modest false-positive cost, and server-side risk scoring is the actual security boundary. Layer them, don’t rely on any single one, and accept that your detection is a probability function, not a truth function.
For OEM-modified reality — the rooted-by-factory devices, the missing Play Services, the custom Android builds — the answer isn’t harder client checks. It’s graduated risk policy at the server, treating “cannot determine integrity” as a different bucket than “clean device” or “tampered device,” and applying step-up authentication or scrutiny accordingly.
The cat-and-mouse game won’t end. Magisk modules will keep being written. New rooting techniques will keep emerging. Your job isn’t to win the game permanently — it’s to raise the cost of attack high enough that motivated attackers go elsewhere and casual attackers can’t bother. That’s a different bar than “detect every rooted device,” and it’s the realistic one.
Next in the Security cluster: secure file upload to a server (encryption at rest, in transit, signed URLs, integrity verification — the engineering for high-stakes file transfer). After that: biometric authentication deep dive, then OWASP MASVS compliance.
Happy coding!