GeeTest v4 (released 2021, now widely deployed) is not just a UI refresh of v3 — the parameter model, the JavaScript API, and the supported challenge types all changed. For an automation engineer that means: solver methods are different, request payloads are different, and a v3 integration will fail silently against a v4-protected site.
This article is the short, accurate version of what changed and what you need to update.
For a working code tutorial, see How to Solve GeeTest in Python. For full background, see the GeeTest Guide.
High-level diff
| Aspect | GeeTest v3 | GeeTest v4 |
|---|---|---|
| Required params | `challenge`, `gt`, `api_server` | `captcha_id` only |
| Challenge types | Slider, click | Slider, click, slide, icon, nine-grid, space-detection |
| Response format | 3 fields (`geetest_challenge`, `geetest_validate`, `geetest_seccode`) | 5 fields including `lot_number`, `pass_token`, `gen_time` |
| Widget JS | `gt.js` | `gt4.js` |
| Mobile-native support | Limited | Full SDK (iOS / Android) |
| Solver method names | `geetest` | `geetest_v4` |
Parameter model
The biggest change is parameter discovery. v3 required you to hit a "register" endpoint on the protected site to get a fresh challenge token before each solve — that token was bound to a `gt` site identifier and an `api_server` host. The solver had to be passed all three.
```python
# v3 — three params
payload = {
    "method": "geetest",
    "gt": gt_value,
    "challenge": fresh_challenge,  # must be re-fetched each solve
    "api_server": "api-na.geetest.com",
    "pageurl": page_url,
}
```
v4 collapsed this to a single `captcha_id` that is embedded in the page HTML and does not need a register call. The solver fetches its own challenge.
```python
# v4 — one param
payload = {
    "method": "geetest_v4",
    "captcha_id": captcha_id,
    "pageurl": page_url,
}
```
This mismatch — calling the v3 method against a v4-protected page, or vice versa — is the single biggest source of "my solver returns OK but the site rejects the token."
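To make the mismatch easy to avoid, here is a minimal submit-and-poll sketch built around the payload shapes above. The `SOLVER_SUBMIT_URL` and `SOLVER_RESULT_URL` endpoints, the `key` parameter, and the `task_id`/`status`/`solution` response keys are assumptions; substitute your provider's actual API.

```python
import time
import requests

# Hypothetical endpoints; replace with your provider's actual API.
SOLVER_SUBMIT_URL = "https://solver.example.com/createTask"
SOLVER_RESULT_URL = "https://solver.example.com/getTaskResult"

def solve_geetest_v4(api_key: str, captcha_id: str, page_url: str, timeout: int = 120) -> dict:
    # Submit the task using the v4 payload shape shown above.
    payload = {
        "key": api_key,
        "method": "geetest_v4",
        "captcha_id": captcha_id,
        "pageurl": page_url,
    }
    task_id = requests.post(SOLVER_SUBMIT_URL, data=payload, timeout=10).json()["task_id"]

    # Bounded polling: give up after `timeout` seconds instead of spinning forever.
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = requests.get(
            SOLVER_RESULT_URL, params={"key": api_key, "id": task_id}, timeout=10
        ).json()
        if result.get("status") == "ready":
            return result["solution"]  # five-field dict, see the next section
        time.sleep(3)
    raise TimeoutError("solver did not return a v4 token in time")
```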
Response payload changes
v3 returned three fields that you injected back into the form:
```json
{
  "geetest_challenge": "...",
  "geetest_validate": "...",
  "geetest_seccode": "..."
}
```
v4 returns five:
```json
{
  "captcha_id": "...",
  "lot_number": "...",
  "pass_token": "...",
  "gen_time": "1714754300",
  "captcha_output": "..."
}
```
All five are required by the verify endpoint. Dropping `gen_time` because it looks like disposable metadata is a common bug — it is treated as a timestamp signature, and the verify call returns invalid without it.
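As a sketch of the submit side, assuming the five-field solution dict above and a site-specific `verify_url` (the form mechanics vary per site; only the field names mirror the v4 response):

```python
import requests

def submit_v4_solution(verify_url: str, solution: dict) -> requests.Response:
    # All five fields must be forwarded verbatim, including gen_time.
    form_data = {
        "captcha_id": solution["captcha_id"],
        "lot_number": solution["lot_number"],
        "pass_token": solution["pass_token"],
        "gen_time": solution["gen_time"],  # signed timestamp: do not regenerate
        "captcha_output": solution["captcha_output"],
    }
    return requests.post(verify_url, data=form_data, timeout=10)
```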
New challenge types
GeeTest v4 added three challenges that v3 did not have:
- Icon click — pick a sequence of icons from a panel
- Nine-grid (image) — select all images matching a rotated reference
- Space-detection slider — match a 3D-perspective gap
For solver providers, these required new ML models. As of the current CaptchaRank benchmark, v4 coverage by major provider looks like this:
| Solver | v3 | v4 slider | v4 icon | v4 nine-grid |
|---|---|---|---|---|
| CaptchaAI | ✅ | ✅ | ✅ | ✅ |
| 2Captcha | ✅ | ✅ | ✅ | partial |
| Anti-Captcha | ✅ | ✅ | ✅ | ✅ |
| CapSolver | ✅ | ✅ | ✅ | ✅ |
| NopeCHA | ✅ | ✅ | partial | partial |
For ranked picks see best GeeTest solver.
Detecting which version a site uses
```python
import re
import requests

def detect_geetest_version(page_url: str) -> str:
    html = requests.get(page_url, timeout=10).text
    if re.search(r'/v4/gt4\.js|window\.initGeetest4', html):
        return "v4"
    if re.search(r'/static/js/gt\.js|window\.initGeetest\(', html):
        return "v3"
    return "unknown"
```
You can also infer it from the parameter set: if the page exposes a `captcha_id` and no `gt`, it is v4.
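A minimal sketch of that parameter-based check, assuming the identifiers appear literally in the served HTML; heavily bundled pages may need DOM or network-level inspection instead:

```python
import re

def detect_from_params(html: str) -> str:
    # Look for the identifiers as object keys or assignments in the raw HTML.
    has_captcha_id = re.search(r'captcha_id["\']?\s*[:=]', html) is not None
    has_gt = re.search(r'\bgt["\']?\s*[:=]', html) is not None
    if has_captcha_id and not has_gt:
        return "v4"
    if has_gt:
        return "v3"
    return "unknown"
```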
Migration tips
If you are porting a v3 integration to v4 (a minimal payload-builder sketch follows the list):
- Switch the solver method to `geetest_v4` (or your provider's equivalent).
- Remove the register call — v4 does not need it.
- Update the form field set — five fields, not three.
- Keep the `gen_time` field exactly as the solver returned it (it is signed).
- Re-test the failure path — a v3 retry loop that re-fetched `challenge` between attempts becomes unnecessary noise on v4.
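The payload-builder sketch referenced above. The function name and keyword arguments are illustrative; the method strings follow the naming used in this article's examples:

```python
def build_solver_payload(version: str, page_url: str, *, captcha_id: str | None = None,
                         gt: str | None = None, challenge: str | None = None,
                         api_server: str | None = None) -> dict:
    """Build the solver request payload for whichever GeeTest version the page runs."""
    if version == "v4":
        # One static identifier; no register call, no per-attempt refresh.
        return {"method": "geetest_v4", "captcha_id": captcha_id, "pageurl": page_url}
    # v3: the challenge must be freshly fetched for every attempt.
    return {
        "method": "geetest",
        "gt": gt,
        "challenge": challenge,
        "api_server": api_server,
        "pageurl": page_url,
    }
```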
FAQ
Is GeeTest v4 harder to solve than v3?
Marginally. v4 introduced more challenge types, but solver providers have caught up. The end-to-end success rate is comparable.
Can I use the same solver provider for both?
Yes — every major provider supports both methods (`geetest` and `geetest_v4`) on the same API key.
Why does my v4 token get rejected?
Most often: a missing `gen_time` field on form submit, or the wrong `pageurl` passed to the solver.
Does GeeTest still support v3?
Yes. v3 is not deprecated; many sites still run it. New deployments default to v4.
Where can I see live GeeTest solver performance?
On captcharank.com/solvers/geetest — speed and success rate refresh continuously.
Production Readiness Notes
Use this guide as a decision and implementation aid, not just as a one-time reference. The practical test for geetest v3 vs v4 is whether the same approach behaves reliably when traffic is messy: rotating sessions, expired tokens, changing widget parameters, intermittent solver delays, and target pages that refresh without warning. For automation and scraping engineers, the safest rollout is to start with a narrow fixture, record every submitted task, and compare the solver response with the browser state that finally submits the form. That makes failures explainable instead of mysterious, especially when a target alternates between visible challenges, invisible checks, and server-side verification.
Evaluation Criteria
A developer guide should become a reusable integration module with typed configuration, bounded polling, structured errors, and a single place for API credentials. For GeeTest work, the most useful scorecard combines technical acceptance with operational cost. A low nominal price is not enough if retries double the real cost per accepted token, and a fast median solve time is not enough if p95 latency stalls the queue. Track these criteria before you standardize the workflow (a minimal latency-stats sketch follows the list):
- The challenge subtype, sitekey, action, rqdata, blob, captchaId, or page URL used for each task.
- Median and p95 solve time, separated by provider and target domain.
- Accepted-token rate on the target page, not just successful API responses.
- Retry count, timeout count, zero-balance incidents, and invalid-parameter errors.
- The exact browser, proxy region, and user-agent that submitted the solved token.
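The latency-stats sketch referenced above, assuming you already record one solve time per attempt (`statistics.quantiles` needs at least two data points):

```python
import statistics

def solve_time_stats(solve_times_ms: list[float]) -> dict:
    # quantiles(n=20) splits the data into 5% bands; the last cut point is p95.
    p95 = statistics.quantiles(solve_times_ms, n=20)[-1]
    return {
        "median_ms": statistics.median(solve_times_ms),
        "p95_ms": p95,
        "attempts": len(solve_times_ms),
    }
```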
Rollout Checklist
Before this guidance moves into a production job, build a small acceptance suite around the pages that matter most. Run it with a fixed browser profile, then repeat with the proxy and concurrency settings you expect in production. Keep the first release conservative: bounded polling, clear timeout handling, and a fallback path when the solver cannot return a usable answer. For GeeTest, re-fetch the v3 `challenge` for every attempt (it is single-use) and re-read the `captcha_id` if the page reloads, because stale GeeTest values are the most common source of false failures. That checklist keeps the article useful after the first copy-paste, because the integration is judged by end-to-end completion rather than by whether a code sample returned a string.
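A bounded retry sketch under those constraints. `fetch_fresh_params` and `solve` are hypothetical callables injected by the caller, and `SolverError` is an illustrative error type; the loop structure is the point:

```python
from typing import Callable

class SolverError(Exception):
    """Raised when the solver returns an unusable answer (illustrative)."""

def solve_with_retries(page_url: str,
                       fetch_fresh_params: Callable[[str], dict],
                       solve: Callable[[dict], dict],
                       max_attempts: int = 3) -> dict:
    # fetch_fresh_params and solve are injected so the loop stays provider-agnostic.
    last_error: Exception | None = None
    for attempt in range(1, max_attempts + 1):
        # Re-read the page every attempt: v3 challenges are single-use,
        # and widget parameters can change between page loads.
        params = fetch_fresh_params(page_url)
        try:
            return solve(params)
        except (TimeoutError, SolverError) as exc:
            last_error = exc
    raise RuntimeError(f"all {max_attempts} attempts failed") from last_error
```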
Monitoring Signals
Healthy CAPTCHA automation is observable. Log the task id, provider, challenge type, target host, queue time, solve time, final submit status, and normalized error code for every attempt. Review those logs in daily batches at first, then move to alerts once the baseline is stable. Sudden drops usually come from target-side changes: a new sitekey, a changed action name, a stricter hostname check, an added managed challenge, or a proxy pool that no longer matches the expected geography. When you can see those shifts quickly, provider switching becomes a controlled decision instead of a late-night rewrite.
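One way to normalize those fields is a single record per attempt; the field names below mirror the list in this paragraph and are otherwise illustrative:

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class AttemptRecord:
    task_id: str
    provider: str
    challenge_type: str   # e.g. "geetest_v4_slider"
    target_host: str
    queue_time_ms: float
    solve_time_ms: float
    submit_status: str    # "accepted" | "rejected" | "timeout"
    error_code: str | None = None
    logged_at: float = 0.0

def log_attempt(record: AttemptRecord) -> None:
    record.logged_at = time.time()
    # One JSON line per attempt keeps downstream aggregation trivial.
    print(json.dumps(asdict(record)))
```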
Maintenance Cadence
Revisit the setup whenever the target UI changes, when the solver provider changes task names or pricing, or when benchmark data shows a sustained latency or solve-rate shift. Keep one known-good fixture for each CAPTCHA subtype and rerun it after dependency upgrades, browser updates, and proxy changes. If the article is used for vendor selection, repeat the same fixture across at least two providers before renewing a balance or migrating the whole pipeline. That habit keeps geetest v3 vs v4 work aligned with the real target behavior rather than with stale assumptions.