A low success rate means the solver is returning tokens but the target site is rejecting them. This is different from a timeout — you're getting a response, but the automation still fails. Each cause has a different fix.
For a full troubleshooting overview, see the CAPTCHA Solver Troubleshooting Guide.
Failure Mode Taxonomy
Before diving into causes, distinguish between these failure types:
| Symptom | What it means |
|---|---|
| Token returned, site responds "invalid-input-response" | Token structurally invalid or wrong type |
| Token returned, site responds "timeout-or-duplicate" | Token expired or already used |
| Token returned, site silently blocks | Score too low (v3) or IP/session flagged |
| Token returned, form submit returns error | Wrong field name or multi-factor rejection |
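Where the rejection is visible in a verification response, a small classifier can map it straight onto this taxonomy. A minimal sketch, assuming reCAPTCHA-style siteverify error codes; other widgets use different strings, so treat the mapping as a starting point.

```python
from typing import Optional

def classify_failure(error_code: Optional[str], http_status: int) -> str:
    # Map a verification error code (or its absence) onto the taxonomy above.
    if error_code == "invalid-input-response":
        return "token structurally invalid or wrong CAPTCHA type (Causes 2/3)"
    if error_code == "timeout-or-duplicate":
        return "token expired or already used (Causes 1/6)"
    if error_code is None and http_status in (403, 429):
        return "silent block: low v3 score or flagged IP/session (Causes 4/5)"
    return "form-level rejection: check field names and other validation"
```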
Cause 1 — Token Expired Before Submission
reCAPTCHA v2/v3 tokens expire after 2 minutes. hCaptcha tokens expire similarly. A common mistake: solving CAPTCHA at the start of a scraping cycle, then submitting the form much later.
Fix: Generate the token immediately before form submission. If there's any delay between solving and submitting, re-solve.
```python
# Wrong: solve early, then do other work
token = solve_recaptcha(api_key, url, site_key)
time.sleep(30)  # Token is now 30s old — may expire during submission
result = submit_form(token)

# Correct: solve immediately before submit
result = submit_form(solve_recaptcha(api_key, url, site_key))
```
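If some delay between solving and submitting is unavoidable, a defensive variant is to track the token's age and re-solve past a safety margin. A minimal sketch using the same placeholder functions as above; the 90-second margin is an assumption chosen to sit well under the roughly 2-minute expiry, not a documented limit.

```python
import time

TOKEN_MAX_AGE = 90  # assumed safety margin, well under the ~120s expiry

def submit_with_fresh_token(api_key, url, site_key):
    token = solve_recaptcha(api_key, url, site_key)
    solved_at = time.monotonic()
    # ... any unavoidable work goes here ...
    if time.monotonic() - solved_at > TOKEN_MAX_AGE:
        # Re-solve rather than risk submitting an expired token
        token = solve_recaptcha(api_key, url, site_key)
    return submit_form(token)
```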
Cause 2 — Wrong CAPTCHA Type / Method
Submitting an hCaptcha task as method=userrecaptcha produces a token for the wrong system. The token may look well formed, but the target's verification endpoint rejects it because it was issued by a different CAPTCHA provider.
Fix: Match the API method to the CAPTCHA widget on the page.
| CAPTCHA | Correct method |
|---|---|
| reCAPTCHA v2 | method=userrecaptcha |
| reCAPTCHA v3 | method=userrecaptcha + version=v3 |
| hCaptcha | method=hcaptcha |
| Cloudflare Turnstile | method=turnstile |
| GeeTest v3 | method=geetest |
| GeeTest v4 | method=geetest4 |
| FunCaptcha | method=funcaptcha |
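If you are already driving the page with Playwright, you can pick the method from whatever widget is actually present instead of assuming one. A minimal sketch; the selectors are the common container class names for each widget and may differ on heavily customized pages, so verify them against the live DOM.

```python
# Common widget container selectors; adjust if the target customizes them.
WIDGET_SELECTORS = {
    ".g-recaptcha": "userrecaptcha",
    ".h-captcha": "hcaptcha",
    ".cf-turnstile": "turnstile",
}

def detect_method(page) -> str:
    # Note: reCAPTCHA v3 often has no visible widget; look for the
    # recaptcha/api.js?render=<sitekey> script tag instead.
    for selector, method in WIDGET_SELECTORS.items():
        if page.query_selector(selector):
            return method
    raise RuntimeError("No known CAPTCHA widget found; inspect the page manually")
```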
Cause 3 — Site Key Mismatch
Using a site key from the wrong environment (staging vs production, subdomain vs root domain) generates tokens that fail server-side validation even though they appear valid.
Fix: Extract the site key dynamically from the live page at submission time.
```python
from playwright.sync_api import sync_playwright

def get_live_site_key(page_url: str) -> str:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(page_url, wait_until="networkidle")
        key = page.get_attribute("[data-sitekey]", "data-sitekey")
        browser.close()
        return key
```
Never hardcode site keys in production automation code.
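Some pages, particularly invisible reCAPTCHA and v3, never render an element carrying data-sitekey; the key only appears in the api.js script URL or the challenge iframe. A hedged fallback sketch that parses the key out of the page HTML; the patterns cover common embed styles and are an assumption, not an exhaustive list.

```python
import re
from typing import Optional

# Fallback when no [data-sitekey] element exists: the key usually appears in
# the recaptcha/api.js?render=<key> script URL or in the challenge iframe src.
SITE_KEY_PATTERNS = [
    r"recaptcha/api\.js\?render=([\w-]+)",
    r"[?&]k=([\w-]+)",  # anchor/bframe iframe query parameter
]

def extract_site_key_from_html(html: str) -> Optional[str]:
    for pattern in SITE_KEY_PATTERNS:
        match = re.search(pattern, html)
        if match:
            return match.group(1)
    return None
```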
Cause 4 — reCAPTCHA v3 Score Too Low
v3 tokens carry a score. If the target site requires 0.7+ and your token scores 0.3, the request is silently blocked (no explicit CAPTCHA error, just a "suspicious activity" message or 403 response).
Fix: Request a minimum score from the solver.
```python
payload = {
    "key": api_key,
    "method": "userrecaptcha",
    "version": "v3",
    "googlekey": site_key,
    "pageurl": page_url,
    "min_score": 0.7,  # Request tokens scoring at least 0.7
    "json": 1,
}
```
If you still get blocked at 0.7, the site may require 0.9. Some sites also check the action parameter — extract and pass the action name from the page's JS.
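The action name usually appears in the page's call to grecaptcha.execute(siteKey, {action: "..."}). A minimal sketch that pulls it out of the page source with a regex; the pattern handles the common inline form and is an assumption, so minified or dynamically built calls may need a different approach.

```python
import re

def extract_v3_action(page_html: str, default: str = "submit") -> str:
    # Look for grecaptcha.execute(<key>, {action: '<name>'}) in inline or bundled JS.
    pattern = r"grecaptcha\.execute\([^)]*?action['\"]?\s*:\s*['\"]([\w/]+)['\"]"
    match = re.search(pattern, page_html)
    return match.group(1) if match else default
```

Pass the extracted name as the action parameter alongside min_score; 2Captcha-compatible APIs generally accept it, but check your provider's documentation.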
Cause 5 — IP Address Flagged
The CAPTCHA token is generated against the page URL and site key, but the target's verification step also factors in the origin IP and session reputation. If your IP has a poor reputation or is a datacenter IP known for scraping, the token is rejected even when it is technically valid.
Fix:
- Use residential or mobile proxies for the browser session
- Route requests through the same IP for both page load and form submission (one way to pin this is sketched below)
- Avoid datacenter IPs for sites with strict bot protection
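A minimal sketch of keeping the page load and the form submission on one IP with requests. PROXY, page_url, submit_url, and form_fields are placeholders, and solve_recaptcha is the same stand-in used earlier. Handing the same proxy to the solver itself is provider-specific; 2Captcha-compatible APIs typically accept proxy and proxytype fields, so check your provider's docs.

```python
import requests

# Placeholder proxy; in production this would come from your residential pool.
PROXY = "http://user:pass@residential.example.com:8000"

session = requests.Session()
session.proxies.update({"http": PROXY, "https": PROXY})  # page load and submit share one IP

page_html = session.get(page_url).text                    # load the page through the proxy
token = solve_recaptcha(api_key, page_url, site_key)      # optionally pass the same proxy to the solver
response = session.post(
    submit_url,
    data={"g-recaptcha-response": token, **form_fields},  # submit from the same IP
)
```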
Cause 6 — Token Already Used
CAPTCHA tokens are single-use. If your code retries a form submission without re-solving the CAPTCHA, the second attempt uses an already-consumed token.
Fix: Always solve fresh on each form submission attempt.
```python
for attempt in range(3):
    token = solve_captcha(api_key, url, site_key)  # Fresh token every attempt
    response = submit_form(token)
    if response.ok:
        break
```
Cause 7 — Solver Worker Quality
On some solvers, a small percentage of workers deliver tokens that pass the API's internal check but fail on stricter sites. This manifests as intermittent failures (e.g., 85% success rate instead of 99%+).
Fix: Switch to a solver with higher per-solve quality guarantees. CaptchaAI uses a quality filter that rejects low-confidence solves before returning results.
Quick Diagnostic Checklist
- ✅ Verify token is generated immediately before submission
- ✅ Confirm the method matches the CAPTCHA type on the page
- ✅ Extract the site key dynamically, not hardcoded
- ✅ For v3 targets, check whether `min_score=0.7` is needed
- ✅ Confirm your IP isn't flagged (test from a clean residential IP)
- ✅ Confirm each form submission uses a fresh token
- ✅ Check the site's bot protection layer (may be independent of CAPTCHA)
Related Guides
- CAPTCHA Solver Troubleshooting Guide — full overview
- reCAPTCHA Solver Token Invalid — v2/v3 specific token rejection patterns
- CAPTCHA Solver Timeout Errors — when solver never returns a result
Production Readiness Notes
Use this guide as a decision and implementation aid, not just a one-time reference. The practical test is whether the same approach stays reliable when traffic is messy: rotating sessions, expired tokens, changing widget parameters, intermittent solver delays, and target pages that refresh without warning. For a developer debugging CAPTCHA automation, the safest rollout is to start with a narrow fixture, record every submitted task, and compare the solver response with the browser state that finally submits the form. That makes failures explainable instead of mysterious, especially when a target alternates between visible challenges, invisible checks, and server-side verification.
Evaluation Criteria
Troubleshooting should change one variable at a time and record the before-and-after result; otherwise proxy, token, and page-state bugs blur together. The most useful scorecard combines technical acceptance with operational cost: a low nominal price is not enough if retries double the real cost per accepted token, and a fast median solve time is not enough if p95 latency stalls the queue. Track these criteria before you standardize the workflow (a small aggregation sketch follows the list):
- The challenge subtype, sitekey, action, rqdata, blob, captchaId, or page URL used for each task.
- Median and p95 solve time, separated by provider and target domain.
- Accepted-token rate on the target page, not just successful API responses.
- Retry count, timeout count, zero-balance incidents, and invalid-parameter errors.
- The exact browser, proxy region, and user-agent that submitted the solved token.
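A minimal sketch of aggregating those numbers from per-attempt records, assuming each attempt is logged as a dict with solve_time, accepted, and retries fields; the field names are placeholders, not a defined schema.

```python
from statistics import median, quantiles

def summarize(attempts: list) -> dict:
    # attempts: one dict per solve attempt, e.g.
    # {"provider": "...", "domain": "...", "solve_time": 14.2, "accepted": True, "retries": 1}
    solve_times = sorted(a["solve_time"] for a in attempts)
    return {
        "median_solve_time": median(solve_times),
        "p95_solve_time": quantiles(solve_times, n=20)[-1],  # 95th percentile cut point
        "accepted_rate": sum(a["accepted"] for a in attempts) / len(attempts),
        "total_retries": sum(a.get("retries", 0) for a in attempts),
    }
```

Split the input by provider and target domain before calling it, so per-domain regressions don't hide inside a healthy global average.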
Rollout Checklist
Before this guidance moves into a production job, build a small acceptance suite around the pages that matter most. Run it with a fixed browser profile, then repeat with the proxy and concurrency settings you expect in production. Keep the first release conservative: bounded polling, clear timeout handling, and a fallback path when the solver cannot return a usable answer. For troubleshooting, preserve the original request payload, solver response, page URL, sitekey, proxy, and browser fingerprint before changing multiple variables. That checklist keeps the article useful after the first copy-paste, because the integration is judged by end-to-end completion rather than by whether a code sample returned a string.
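A hedged sketch of the conservative shape described above: bounded polling with a hard deadline and an explicit fallback path. start_task and poll_task are hypothetical wrappers around your solver's submit and result endpoints, and the timings are assumptions to tune per target.

```python
import time

SOLVE_DEADLINE = 120  # assumed hard cap per attempt, in seconds
POLL_INTERVAL = 5     # assumed polling interval; never poll in a tight loop

def solve_with_deadline(start_task, poll_task, task_args):
    task_id = start_task(**task_args)
    deadline = time.monotonic() + SOLVE_DEADLINE
    while time.monotonic() < deadline:
        result = poll_task(task_id)
        if result is not None:   # solver returned a usable answer
            return result
        time.sleep(POLL_INTERVAL)
    return None                   # caller takes the fallback path: skip, retry later, or alert
```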
Monitoring Signals
Healthy CAPTCHA automation is observable. Log the task id, provider, challenge type, target host, queue time, solve time, final submit status, and normalized error code for every attempt. Review those logs in daily batches at first, then move to alerts once the baseline is stable. Sudden drops usually come from target-side changes: a new sitekey, a changed action name, a stricter hostname check, an added managed challenge, or a proxy pool that no longer matches the expected geography. When you can see those shifts quickly, provider switching becomes a controlled decision instead of a late-night rewrite.
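One way to make those fields concrete is a single structured record per attempt. A minimal sketch; the field names simply mirror the list in this section and are not a required schema.

```python
import json
import logging
import time

logger = logging.getLogger("captcha")

def log_attempt(task_id, provider, challenge_type, target_host,
                queue_time, solve_time, submit_status, error_code=None):
    # One JSON line per attempt so daily reviews and later alerting can parse it.
    logger.info(json.dumps({
        "ts": time.time(),
        "task_id": task_id,
        "provider": provider,
        "challenge_type": challenge_type,
        "target_host": target_host,
        "queue_time_s": queue_time,
        "solve_time_s": solve_time,
        "submit_status": submit_status,
        "error_code": error_code,
    }))
```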
Maintenance Cadence
Revisit the setup whenever the target UI changes, when the solver provider changes task names or pricing, or when benchmark data shows a sustained latency or solve-rate shift. Keep one known-good fixture for each CAPTCHA subtype and rerun it after dependency upgrades, browser updates, and proxy changes. If the article is used for vendor selection, repeat the same fixture across at least two providers before renewing a balance or migrating the whole pipeline. That habit keeps low-success-rate troubleshooting aligned with the real target behavior rather than with stale assumptions.