Python is one of the go-to languages for web automation, scraping, testing, and AI data pipelines. All of these benefit from solid proxy integration. When you plug proxies into your Python stack correctly, you can distribute requests, reduce IP bans, and reach region-specific data while keeping your origin IP safer.
This tutorial walks through how to use proxies in Python with four core tools—requests, httpx, aiohttp, and Selenium. You’ll learn how to plug in datacenter and authenticated proxies, handle errors, rotate IPs, and tune performance for real-world workloads like scraping, SEO, QA, and analytics.
Before you write any code, you need to know what you are connecting to. In Python, you mostly care about four pieces of information from your provider: the proxy host, the port, your credentials (username and password, unless you authenticate by IP allowlisting), and the scheme (http, https, or socks5).
If you are unsure about formats or auth, see your provider’s documentation or a general reference like “Proxy URL Formats” and “Proxy Authentication Methods” in your guides section.
Most proxy providers expose connection details in one of these formats:
With username/password
http://username:password@proxy.example.com:8000
With IP allowlisting (no credentials)
http://proxy.example.com:8000
SOCKS5
socks5://username:password@proxy.example.com:1080
You will reuse these values across all libraries:
- host, port
- username, password
- scheme like http, https, socks5

Once you have a working proxy line from your provider's dashboard, you can plug it into the examples below.
requests is the most widely used HTTP client in Python. It handles basic proxy usage with a proxies dictionary.
import requests
proxies = {
    "http": "http://user:pass@proxy.example.com:8000",
    "https": "http://user:pass@proxy.example.com:8000",
}
resp = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=10)
print(resp.json())
If your proxy uses IP allowlisting and does not require credentials:
proxies = {
    "http": "http://proxy.example.com:8000",
    "https": "http://proxy.example.com:8000",
}
For scraping and automation, you should use a Session with retries and timeouts to avoid hanging requests.
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry
proxies = {
    "http": "http://user:pass@proxy.example.com:8000",
    "https": "http://user:pass@proxy.example.com:8000",
}
session = requests.Session()
retry_strategy = Retry(
    total=5,
    status_forcelist=[429, 500, 502, 503, 504],
    allowed_methods=["HEAD", "GET", "OPTIONS"],
    backoff_factor=1.0,  # exponential backoff between retries
)
adapter = HTTPAdapter(max_retries=retry_strategy)
session.mount("http://", adapter)
session.mount("https://", adapter)
resp = session.get("https://httpbin.org/ip", proxies=proxies, timeout=10)
print(resp.json())
You can add custom headers, user-agents, and cookies like any other requests call; the proxy only changes the network route.
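As a quick illustration (the header values and cookie below are placeholders), the usual request options sit alongside the proxies argument exactly as they would without a proxy:

import requests

proxies = {
    "http": "http://user:pass@proxy.example.com:8000",
    "https": "http://user:pass@proxy.example.com:8000",
}

headers = {
    "User-Agent": "Mozilla/5.0 (compatible; MyScraper/1.0)",  # placeholder UA
    "Accept-Language": "en-US,en;q=0.9",
}
cookies = {"session_id": "example-value"}  # placeholder cookie

# Headers and cookies travel in the HTTP request as usual;
# the proxies dict only changes which network path the request takes.
resp = requests.get(
    "https://httpbin.org/headers",
    proxies=proxies,
    headers=headers,
    cookies=cookies,
    timeout=10,
)
print(resp.json())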
If your provider gives SOCKS proxies:
pip install "requests[socks]"
import requests
proxies = {
    "http": "socks5://user:pass@proxy.example.com:1080",
    "https": "socks5://user:pass@proxy.example.com:1080",
}
resp = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=10)
print(resp.json())
SOCKS is useful when you need more flexible transport, but simple HTTP proxies are enough for most datacenter setups.
httpx is a modern HTTP client that supports both synchronous and asynchronous workflows.
pip install httpx
import httpx
proxy_url = "http://user:pass@proxy.example.com:8000"
# Recent httpx versions (0.26+) use proxy=...; older releases used proxies=...
with httpx.Client(proxy=proxy_url, timeout=10.0) as client:
    resp = client.get("https://httpbin.org/ip")
    print(resp.json())
You can also route each scheme through its own proxy. On recent httpx versions this is done with transport mounts (older versions accepted a proxies dictionary keyed by scheme):

mounts = {
    "http://": httpx.HTTPTransport(proxy="http://user:pass@proxy.example.com:8000"),
    "https://": httpx.HTTPTransport(proxy="http://user:pass@proxy.example.com:8000"),
}

with httpx.Client(mounts=mounts) as client:
    ...
Async mode is useful when you need to perform many concurrent requests efficiently.
import asyncio
import httpx
proxy_url = "http://user:pass@proxy.example.com:8000"
async def fetch_ip(client: httpx.AsyncClient) -> None:
    resp = await client.get("https://httpbin.org/ip")
    print(resp.json())

async def main():
    # Recent httpx versions (0.26+) use proxy=...; older releases used proxies=...
    async with httpx.AsyncClient(proxy=proxy_url, timeout=10.0) as client:
        tasks = [fetch_ip(client) for _ in range(5)]
        await asyncio.gather(*tasks)

asyncio.run(main())
pip install "httpx[socks]"
import httpx
import asyncio
async def main():
    async with httpx.AsyncClient(
        # proxy= on recent httpx versions; older releases used proxies=
        proxy="socks5://user:pass@proxy.example.com:1080",
        timeout=10.0,
    ) as client:
        resp = await client.get("https://httpbin.org/ip")
        print(resp.json())

asyncio.run(main())
aiohttp is another popular async HTTP client, often used in custom scrapers.
pip install aiohttp
import aiohttp
import asyncio
async def fetch_ip(session: aiohttp.ClientSession, proxy_url: str) -> None:
    # aiohttp takes the proxy per request rather than per session
    async with session.get("https://httpbin.org/ip", proxy=proxy_url) as resp:
        print(await resp.text())

async def main():
    proxy_url = "http://user:pass@proxy.example.com:8000"
    timeout = aiohttp.ClientTimeout(total=15)
    async with aiohttp.ClientSession(timeout=timeout) as session:
        await fetch_ip(session, proxy_url)

asyncio.run(main())
You can pass proxy per request, or design a helper that always includes your proxy configuration.
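Here is a minimal sketch of the helper approach; the ProxyClient name and its single get_text method are just illustrative:

import asyncio
import aiohttp

class ProxyClient:
    """Thin wrapper that attaches the same proxy to every request (illustrative)."""

    def __init__(self, session: aiohttp.ClientSession, proxy_url: str) -> None:
        self._session = session
        self._proxy_url = proxy_url

    async def get_text(self, url: str) -> str:
        async with self._session.get(url, proxy=self._proxy_url) as resp:
            return await resp.text()

async def main():
    timeout = aiohttp.ClientTimeout(total=15)
    async with aiohttp.ClientSession(timeout=timeout) as session:
        client = ProxyClient(session, "http://user:pass@proxy.example.com:8000")
        print(await client.get_text("https://httpbin.org/ip"))

asyncio.run(main())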
To run many requests concurrently:
import aiohttp
import asyncio
URLS = ["https://httpbin.org/ip"] * 20
async def fetch(url: str, session: aiohttp.ClientSession, proxy_url: str) -> None:
    async with session.get(url, proxy=proxy_url) as resp:
        print(await resp.text())

async def main():
    proxy_url = "http://user:pass@proxy.example.com:8000"
    timeout = aiohttp.ClientTimeout(total=15)
    async with aiohttp.ClientSession(timeout=timeout) as session:
        tasks = [fetch(url, session, proxy_url) for url in URLS]
        await asyncio.gather(*tasks)

asyncio.run(main())
Always keep an eye on concurrency limits and acceptable use policies from both your proxy provider and target sites.
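One way to stay within those limits is to cap in-flight requests with an asyncio.Semaphore. A sketch, assuming a limit of 5 concurrent requests (an arbitrary example value):

import asyncio
import aiohttp

URLS = ["https://httpbin.org/ip"] * 20
PROXY_URL = "http://user:pass@proxy.example.com:8000"
MAX_CONCURRENCY = 5  # arbitrary example; tune to your provider's policy

async def fetch(url: str, session: aiohttp.ClientSession, sem: asyncio.Semaphore) -> None:
    async with sem:  # only MAX_CONCURRENCY requests run at once
        async with session.get(url, proxy=PROXY_URL) as resp:
            print(resp.status)

async def main():
    sem = asyncio.Semaphore(MAX_CONCURRENCY)
    timeout = aiohttp.ClientTimeout(total=15)
    async with aiohttp.ClientSession(timeout=timeout) as session:
        await asyncio.gather(*(fetch(url, session, sem) for url in URLS))

asyncio.run(main())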
Selenium is useful when you need a real browser: JavaScript-heavy sites, complex login flows, or UI testing. You can combine Selenium with proxies for more realistic user simulations.
pip install selenium
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
chrome_options = Options()
# Chrome ignores username:password embedded in --proxy-server, so this form
# assumes an IP-allowlisted proxy; authenticated options are covered below.
chrome_options.add_argument("--proxy-server=http://proxy.example.com:8000")
service = Service("/path/to/chromedriver")
driver = webdriver.Chrome(service=service, options=chrome_options)
driver.get("https://httpbin.org/ip")
print(driver.title)
driver.quit()
Some setups prefer specifying the proxy host/port and then authenticating:
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
chrome_options = Options()
chrome_options.add_argument("--proxy-server=http://proxy.example.com:8000")
service = Service("/path/to/chromedriver")
driver = webdriver.Chrome(service=service, options=chrome_options)
driver.get("https://httpbin.org/ip")
# If your provider supports it, you can handle auth prompts or cookie-based auth here.
driver.quit()
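If you need username/password authentication with Chrome and cannot use IP allowlisting, a common workaround is the third-party selenium-wire package, which routes browser traffic through a local proxy that injects the credentials. A minimal sketch, assuming selenium-wire is installed (pip install selenium-wire) and using placeholder credentials:

# Requires: pip install selenium-wire
from seleniumwire import webdriver  # drop-in wrapper around selenium's webdriver

seleniumwire_options = {
    "proxy": {
        "http": "http://user:pass@proxy.example.com:8000",
        "https": "http://user:pass@proxy.example.com:8000",
        "no_proxy": "localhost,127.0.0.1",
    }
}

driver = webdriver.Chrome(seleniumwire_options=seleniumwire_options)
driver.get("https://httpbin.org/ip")
print(driver.page_source[:200])
driver.quit()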
You can also run Chrome in headless mode for faster automation:
chrome_options.add_argument("--headless=new")
Browser automation is heavier than simple HTTP requests, so keep concurrency modest and be conservative with crawl speeds.
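For example, here is a sketch that caps parallel browser sessions with a small thread pool; the worker count, proxy endpoint, and URL list are placeholders, and it assumes Selenium 4.6+ so Selenium Manager can locate a driver:

from concurrent.futures import ThreadPoolExecutor

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

URLS = ["https://httpbin.org/ip"] * 4  # placeholder workload

def fetch_title(url: str) -> str:
    chrome_options = Options()
    chrome_options.add_argument("--headless=new")
    chrome_options.add_argument("--proxy-server=http://proxy.example.com:8000")
    driver = webdriver.Chrome(options=chrome_options)  # Selenium Manager resolves the driver
    try:
        driver.get(url)
        return driver.title
    finally:
        driver.quit()

# Keep the pool small: each worker is a full browser process.
with ThreadPoolExecutor(max_workers=2) as pool:
    for title in pool.map(fetch_title, URLS):
        print(title)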
For large-scale scraping or monitoring, a single IP is rarely enough. You have two main options: rotate through your own pool of proxy IPs in code, or use a rotating endpoint or sticky sessions from your provider. A simple client-side rotation with requests looks like this:
import random
import time
import requests
PROXY_LIST = [
    "http://user:pass@proxy1.example.com:8000",
    "http://user:pass@proxy2.example.com:8000",
    "http://user:pass@proxy3.example.com:8000",
]
def get_proxies():
    proxy_url = random.choice(PROXY_LIST)
    return {
        "http": proxy_url,
        "https": proxy_url,
    }

def fetch(url: str):
    proxies = get_proxies()
    try:
        resp = requests.get(url, proxies=proxies, timeout=10)
        print(resp.status_code, resp.text[:80])
    except Exception as exc:
        print("Proxy failed:", proxies["http"], "|", exc)

for _ in range(10):
    fetch("https://httpbin.org/ip")
    time.sleep(2)
In more advanced setups, you might track per-proxy success rates, temporarily remove IPs that keep failing, or dedicate different pools to different targets. If your provider offers rotating endpoints or “sticky sessions,” use those features instead of implementing low-level rotation yourself.
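As a hedged illustration of client-side rotation with basic health tracking, here is a small pool class; the class name and failure threshold are arbitrary choices, not a library API:

import random
import requests

class ProxyPool:
    """Rotate proxies and drop ones that keep failing (illustrative thresholds)."""

    def __init__(self, proxy_urls, max_failures: int = 3) -> None:
        self._failures = {url: 0 for url in proxy_urls}
        self._max_failures = max_failures

    def pick(self) -> str:
        healthy = [u for u, f in self._failures.items() if f < self._max_failures]
        if not healthy:
            raise RuntimeError("No healthy proxies left in the pool")
        return random.choice(healthy)

    def report_failure(self, proxy_url: str) -> None:
        self._failures[proxy_url] += 1

pool = ProxyPool([
    "http://user:pass@proxy1.example.com:8000",
    "http://user:pass@proxy2.example.com:8000",
])

for _ in range(5):
    proxy_url = pool.pick()
    try:
        resp = requests.get(
            "https://httpbin.org/ip",
            proxies={"http": proxy_url, "https": proxy_url},
            timeout=10,
        )
        print(resp.status_code)
    except requests.RequestException:
        pool.report_failure(proxy_url)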
Before you scale, always verify that your proxies behave as expected.
import requests
proxies = {
    "http": "http://user:pass@proxy.example.com:8000",
    "https": "http://user:pass@proxy.example.com:8000",
}
resp = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=10)
print(resp.text)
Confirm that the IP address in the response belongs to your proxy, not your own connection, and that response times are reasonable. To check an entire pool, you can wrap the same check in a small helper:
TEST_URL = "https://httpbin.org/status/200"
def proxy_ok(proxy_url: str) -> bool:
    proxies = {"http": proxy_url, "https": proxy_url}
    try:
        r = requests.get(TEST_URL, proxies=proxies, timeout=8)
        return r.status_code == 200
    except Exception:
        return False

for proxy_url in PROXY_LIST:
    print(proxy_url, "OK" if proxy_ok(proxy_url) else "FAIL")
You can extend this pattern into a regular health check job to keep your pool clean.
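A sketch of such a job, reusing proxy_ok and PROXY_LIST from above; the 10-minute interval is an arbitrary example:

import time

CHECK_INTERVAL_SECONDS = 600  # arbitrary: re-check the pool every 10 minutes

def refresh_pool(candidates: list[str]) -> list[str]:
    # Keep only proxies that currently pass the health check.
    return [proxy_url for proxy_url in candidates if proxy_ok(proxy_url)]

while True:
    healthy = refresh_pool(PROXY_LIST)
    print(f"{len(healthy)}/{len(PROXY_LIST)} proxies healthy")
    time.sleep(CHECK_INTERVAL_SECONDS)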
| Error / Symptom | Likely Cause | What to check or fix |
|---|---|---|
| `ProxyError` / `ConnectionError` | Wrong host/port or dead proxy | Verify proxy hostname, port, and that the proxy is online. |
| `407 Proxy Authentication Required` | Invalid username/password or no auth configured | Double-check credentials; confirm IP allowlist vs user/pass mode. |
| Frequent `Timeout` | Slow proxy, heavy target site, or too much load | Increase timeout slightly, reduce concurrency, or switch to a better proxy IP. |
| `SSLError` or `CERTIFICATE_VERIFY_FAILED` | TLS issues, man-in-the-middle inspection, or test settings | Ensure HTTPS proxy support; for debugging you can disable verification temporarily. |
| Target site returns many `403` or `429` | Blocked or rate-limited by destination | Slow down requests, rotate IPs, adjust headers, and review acceptable use policies. |
| Works locally, fails through proxy | Geo-blocks or IP reputation problems | Check geolocation, ASN, and whether the target allows that proxy network. |
For production systems, log errors with enough detail to identify problematic IPs and patterns over time.
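A minimal sketch using the standard logging module; the logger name and log fields are just examples:

import logging

import requests

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)
logger = logging.getLogger("proxy-client")

def fetch_via_proxy(url: str, proxy_url: str) -> None:
    proxies = {"http": proxy_url, "https": proxy_url}
    try:
        resp = requests.get(url, proxies=proxies, timeout=10)
        logger.info("proxy=%s status=%s url=%s", proxy_url, resp.status_code, url)
    except requests.RequestException as exc:
        # Log the proxy and error type so failing IPs stand out in aggregate.
        logger.warning("proxy=%s error=%s url=%s", proxy_url, type(exc).__name__, url)

fetch_via_proxy("https://httpbin.org/ip", "http://user:pass@proxy.example.com:8000")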
Finally, for many concurrent requests, prefer an async client (httpx or aiohttp) over a synchronous loop.

Proxies are a network tool, not an excuse to ignore rules.
When using proxies in Python, respect your provider's acceptable use policy as well as the rate limits and terms of the sites you target.
Well-designed proxy use keeps your projects sustainable and reduces the risk of disruption.
Do I need different code for datacenter, residential, or mobile proxies?

Usually no. Your Python code mostly cares about the proxy protocol and authentication format. Datacenter, residential, or mobile proxies are handled the same way at the HTTP layer; the differences are in pricing, trust level, and how targets treat the IP ranges.
Should I use requests or an async client with proxies?

If you have light to moderate workloads, requests with sessions and timeouts is enough. For high-concurrency scraping or pipelines that run thousands of requests per minute, an async client such as httpx or aiohttp is more efficient.
How many proxies do I need?

It depends on your concurrency, targets, and tolerance for CAPTCHAs or bans. Some small projects work with a handful of static IPs. Larger jobs often use dozens or hundreds of IPs, or a rotating endpoint, so each proxy carries only a fraction of the total traffic.
Can I use proxies with a real browser?

Yes. You can use Selenium with Chrome or Firefox and configure a proxy for each browser instance. This is useful when you need JavaScript rendering or want to mimic real user sessions, but the cost and complexity are higher than simple HTTP clients.
Where should I store proxy credentials?

It is better to avoid hard-coding secrets. Store proxy usernames and passwords in environment variables, configuration files outside version control, or a secrets manager. Read them at runtime and construct your proxy URLs dynamically.
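For example, a sketch that reads placeholder environment variable names (PROXY_HOST, PROXY_PORT, PROXY_USER, PROXY_PASS) and builds the proxy URL at runtime:

import os

import requests

# These variable names are just an example; use whatever your deployment defines.
host = os.environ["PROXY_HOST"]
port = os.environ["PROXY_PORT"]
user = os.environ["PROXY_USER"]
password = os.environ["PROXY_PASS"]

proxy_url = f"http://{user}:{password}@{host}:{port}"
proxies = {"http": proxy_url, "https": proxy_url}

resp = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=10)
print(resp.json())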
When proxies are treated as a first-class part of your Python stack, you gain more control over where and how your traffic appears on the internet. By understanding connection formats, integrating proxies into requests, httpx, aiohttp, and Selenium, and layering in rotation, testing, and monitoring, you can build data pipelines and automation workflows that are both robust and respectful of the systems you touch.
For teams that rely on Python for scraping, SEO tracking, QA, or analytics, reliable datacenter proxies with clear pricing and strong uptime make day-to-day work much easier. ProxiesThatWork focuses on developer-friendly dedicated IPs that plug cleanly into the patterns you have seen in this tutorial, so you can spend more time on your code and less time chasing unstable networks.

Nigel is a technology journalist and privacy researcher. He combines hands-on experience with technical tools like proxies and VPNs with in-depth analysis to help businesses and individuals make informed decisions about secure internet practices.