Java still powers a huge amount of production infrastructure: APIs, crawlers, test harnesses, and internal tools. Many of these workloads need proxies to manage IP reputation, reach region-locked content, or keep sensitive infrastructure hidden behind an extra network layer.
Using proxies in Java lets you control which IP your requests appear to come from, reach content that is restricted by region, and keep your own infrastructure one step removed from the sites you access.
This guide walks through proxy integration in modern Java using three major HTTP clients:
- java.net.http.HttpClient (Java 11+)
- OkHttp
- Apache HttpClient (5.x)

You'll see how to configure HTTP and HTTPS proxies, add authentication, handle timeouts and connection pools, rotate IPs safely, and test that everything works as expected.
Before we dive into code, it helps to name the main moving parts.
Proxy host / port
The server and port your Java app connects to instead of going directly to the destination (for example, proxy.example.com:8080).
HTTP vs SOCKS
HTTP proxies understand web requests and tunnel HTTPS via CONNECT; SOCKS proxies forward raw TCP connections at a lower level. Most Java HTTP clients are configured with HTTP/HTTPS proxies.
Authentication
How the proxy verifies you, typically a username/password pair or an allowlist of your server's IP addresses.
Static vs rotating
A static proxy keeps the same exit IP across requests; a rotating setup changes the exit IP per request or per session, either in your own code or on the provider's gateway.
Most proxy providers document whether you should use HTTP or SOCKS, how to authenticate, and whether your connection goes through a static or rotating pool.
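To make these terms concrete: in core Java the HTTP vs SOCKS split maps directly onto java.net.Proxy.Type, and the host/port pair is just an InetSocketAddress. A minimal sketch (proxy.example.com:8080 is a placeholder endpoint):

```java
import java.net.InetSocketAddress;
import java.net.Proxy;

public class ProxyTypesSketch {
    public static void main(String[] args) {
        // The endpoint from your provider, e.g. proxy.example.com:8080 (placeholder values)
        InetSocketAddress endpoint = new InetSocketAddress("proxy.example.com", 8080);

        // HTTP proxy: the client speaks HTTP to the proxy (CONNECT tunnel for HTTPS)
        Proxy httpProxy = new Proxy(Proxy.Type.HTTP, endpoint);

        // SOCKS proxy: forwards arbitrary TCP traffic at a lower level
        Proxy socksProxy = new Proxy(Proxy.Type.SOCKS, endpoint);

        System.out.println(httpProxy);  // prints something like "HTTP @ proxy.example.com:8080"
        System.out.println(socksProxy);
    }
}
```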
Typical Java use cases that benefit from proxies include web scraping and data collection, price and SEO monitoring, QA and test automation against region-specific content, and internal research or analytics pipelines. At scale, proxies also help you spread traffic across many IPs, keep block rates down on strict targets, and separate your own infrastructure from the sites you query.
For these workloads, dedicated datacenter proxies are often a good fit: predictable, fast, and easier to integrate into Java HTTP clients.
The modern java.net.http.HttpClient (Java 11 and newer) has built-in proxy support.
```java
import java.net.*;
import java.net.http.*;

public class HttpClientProxyExample {
    public static void main(String[] args) throws Exception {
        ProxySelector proxySelector = ProxySelector.of(
                new InetSocketAddress("proxy.example.com", 8080)
        );

        HttpClient client = HttpClient.newBuilder()
                .proxy(proxySelector)
                .build();

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://httpbin.org/ip"))
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```
If your provider uses IP allowlisting, this is often enough: no explicit credentials, just the proxy host and port.
HttpClient does not have a direct setProxyCredentials method. Instead, you typically either set the Proxy-Authorization header yourself (when the proxy expects Basic auth) or register an Authenticator on the client. Example with Basic proxy authentication using an Authenticator:
```java
import java.net.*;
import java.net.http.*;

public class HttpClientProxyAuthExample {
    public static void main(String[] args) throws Exception {
        String proxyHost = "proxy.example.com";
        int proxyPort = 8080;
        String username = "user";
        String password = "pass";

        ProxySelector proxySelector = ProxySelector.of(new InetSocketAddress(proxyHost, proxyPort));

        // Only answer credential requests that come from our proxy
        Authenticator authenticator = new Authenticator() {
            @Override
            protected PasswordAuthentication getPasswordAuthentication() {
                if (getRequestorType() == RequestorType.PROXY &&
                        getRequestingHost().equalsIgnoreCase(proxyHost) &&
                        getRequestingPort() == proxyPort) {
                    return new PasswordAuthentication(username, password.toCharArray());
                }
                return null;
            }
        };

        HttpClient client = HttpClient.newBuilder()
                .proxy(proxySelector)
                .authenticator(authenticator)
                .build();

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://httpbin.org/ip"))
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```
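One JDK-specific gotcha worth knowing: for HTTPS targets the proxy connection goes through a CONNECT tunnel, and recent JDKs disable Basic authentication for tunneling by default via the jdk.http.auth.tunneling.disabledSchemes networking property. If your Authenticator never seems to be consulted for https:// URLs, check that property; clearing it at JVM startup is the usual workaround, assuming your security policy allows Basic auth to the proxy:

```java
// Assumption: your policy allows Basic auth to the proxy over CONNECT tunnels.
// Equivalent to starting the JVM with -Djdk.http.auth.tunneling.disabledSchemes=""
// Set this before the client sends its first request.
System.setProperty("jdk.http.auth.tunneling.disabledSchemes", "");
```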
For production workloads, always configure timeouts and, if needed, redirects:
```java
HttpClient client = HttpClient.newBuilder()
        .proxy(proxySelector)
        .connectTimeout(java.time.Duration.ofSeconds(10))
        .followRedirects(HttpClient.Redirect.NORMAL)
        .build();
```
You can also use HttpRequest.Builder#timeout for per-request timeouts:
```java
HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("https://httpbin.org/ip"))
        .timeout(java.time.Duration.ofSeconds(15))
        .GET()
        .build();
```
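The same proxy and timeout settings apply to asynchronous requests as well. A minimal sketch using sendAsync, reusing the client and request built above:

```java
// Non-blocking variant: the proxy configured on the client is used automatically.
client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
        .thenApply(HttpResponse::body)
        .thenAccept(System.out::println)
        .join(); // block here only for the sake of the demo
```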
OkHttp is widely used in modern Java and Android apps for HTTP traffic.
```java
import okhttp3.*;

import java.net.InetSocketAddress;
import java.net.Proxy;

public class OkHttpProxyExample {
    public static void main(String[] args) throws Exception {
        Proxy proxy = new Proxy(
                Proxy.Type.HTTP,
                new InetSocketAddress("proxy.example.com", 8080)
        );

        OkHttpClient client = new OkHttpClient.Builder()
                .proxy(proxy)
                .build();

        Request request = new Request.Builder()
                .url("https://httpbin.org/ip")
                .build();

        try (Response response = client.newCall(request).execute()) {
            System.out.println(response.body().string());
        }
    }
}
```
Use an Authenticator to handle 407 Proxy Authentication Required responses:
```java
import okhttp3.*;

import java.net.InetSocketAddress;
import java.net.Proxy;

public class OkHttpProxyAuthExample {
    public static void main(String[] args) throws Exception {
        String username = "user";
        String password = "pass";
        String creds = Credentials.basic(username, password);

        Proxy proxy = new Proxy(
                Proxy.Type.HTTP,
                new InetSocketAddress("proxy.example.com", 8080)
        );

        OkHttpClient client = new OkHttpClient.Builder()
                .proxy(proxy)
                .proxyAuthenticator((route, response) -> {
                    if (response.request().header("Proxy-Authorization") != null) {
                        return null; // Give up, we already tried these credentials
                    }
                    return response.request().newBuilder()
                            .header("Proxy-Authorization", creds)
                            .build();
                })
                .build();

        Request request = new Request.Builder()
                .url("https://httpbin.org/ip")
                .build();

        try (Response response = client.newCall(request).execute()) {
            System.out.println(response.body().string());
        }
    }
}
```
OkHttp has sensible defaults, but you should tune timeouts and connection pool behavior for scraping and high-concurrency workloads:
```java
OkHttpClient client = new OkHttpClient.Builder()
        .proxy(proxy)
        .connectTimeout(java.time.Duration.ofSeconds(10))
        .readTimeout(java.time.Duration.ofSeconds(20))
        .writeTimeout(java.time.Duration.ofSeconds(20))
        .build();
```
Connection pooling is handled automatically. Reuse one OkHttpClient per proxy configuration instead of creating a new client per request.
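If one process needs several proxy configurations, a common pattern is to build one base client and derive per-proxy clients from it with newBuilder(), which shares the connection pool and dispatcher across the derived clients. A sketch (the proxy-a/proxy-b endpoints are placeholders):

```java
// One base client holds the shared connection pool, dispatcher, and timeouts.
OkHttpClient baseClient = new OkHttpClient.Builder()
        .connectTimeout(java.time.Duration.ofSeconds(10))
        .readTimeout(java.time.Duration.ofSeconds(20))
        .build();

// Derived clients reuse those resources but route through different proxies.
OkHttpClient viaProxyA = baseClient.newBuilder()
        .proxy(new java.net.Proxy(java.net.Proxy.Type.HTTP,
                new java.net.InetSocketAddress("proxy-a.example.com", 8080))) // placeholder endpoint
        .build();

OkHttpClient viaProxyB = baseClient.newBuilder()
        .proxy(new java.net.Proxy(java.net.Proxy.Type.HTTP,
                new java.net.InetSocketAddress("proxy-b.example.com", 8080))) // placeholder endpoint
        .build();
```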
Apache HttpClient is still common in many enterprise stacks and older services.
Below are examples using HttpClient 5.x, but the concepts are similar in 4.x.
```java
import org.apache.hc.client5.http.classic.methods.HttpGet;
import org.apache.hc.client5.http.impl.classic.CloseableHttpClient;
import org.apache.hc.client5.http.impl.classic.CloseableHttpResponse;
import org.apache.hc.client5.http.impl.classic.HttpClients;
import org.apache.hc.client5.http.impl.routing.DefaultProxyRoutePlanner;
import org.apache.hc.core5.http.HttpHost;

public class ApacheHttpClientProxyExample {
    public static void main(String[] args) throws Exception {
        HttpHost proxy = new HttpHost("proxy.example.com", 8080);
        DefaultProxyRoutePlanner routePlanner = new DefaultProxyRoutePlanner(proxy);

        try (CloseableHttpClient client = HttpClients.custom()
                .setRoutePlanner(routePlanner)
                .build()) {
            HttpGet get = new HttpGet("https://httpbin.org/ip");
            try (CloseableHttpResponse response = client.execute(get)) {
                System.out.println(response.getCode());
                System.out.println(new String(response.getEntity().getContent().readAllBytes()));
            }
        }
    }
}
```
Use credentials providers and auth scopes:
```java
import org.apache.hc.client5.http.auth.AuthScope;
import org.apache.hc.client5.http.auth.UsernamePasswordCredentials;
import org.apache.hc.client5.http.classic.methods.HttpGet;
import org.apache.hc.client5.http.impl.auth.BasicCredentialsProvider;
import org.apache.hc.client5.http.impl.classic.CloseableHttpClient;
import org.apache.hc.client5.http.impl.classic.CloseableHttpResponse;
import org.apache.hc.client5.http.impl.classic.HttpClients;
import org.apache.hc.client5.http.impl.routing.DefaultProxyRoutePlanner;
import org.apache.hc.core5.http.HttpHost;

public class ApacheHttpClientProxyAuthExample {
    public static void main(String[] args) throws Exception {
        HttpHost proxy = new HttpHost("proxy.example.com", 8080);

        // Scope the credentials to the proxy host so they are only sent there
        BasicCredentialsProvider credsProvider = new BasicCredentialsProvider();
        credsProvider.setCredentials(
                new AuthScope(proxy),
                new UsernamePasswordCredentials("user", "pass".toCharArray())
        );

        DefaultProxyRoutePlanner routePlanner = new DefaultProxyRoutePlanner(proxy);

        try (CloseableHttpClient client = HttpClients.custom()
                .setRoutePlanner(routePlanner)
                .setDefaultCredentialsProvider(credsProvider)
                .build()) {
            HttpGet get = new HttpGet("https://httpbin.org/ip");
            try (CloseableHttpResponse response = client.execute(get)) {
                System.out.println(response.getCode());
                System.out.println(new String(response.getEntity().getContent().readAllBytes()));
            }
        }
    }
}
```
You can configure timeouts on the client or per request:
```java
import org.apache.hc.client5.http.config.RequestConfig;
import org.apache.hc.core5.util.Timeout;

// HttpClient 5.x expects Timeout values rather than raw milliseconds
RequestConfig requestConfig = RequestConfig.custom()
        .setConnectTimeout(Timeout.ofSeconds(10))
        .setResponseTimeout(Timeout.ofSeconds(20))
        .build();

try (CloseableHttpClient client = HttpClients.custom()
        .setRoutePlanner(routePlanner)
        .setDefaultRequestConfig(requestConfig)
        .build()) {
    // ...
}
```
Apache HttpClient also has rich connection pool controls; for high volume workloads, tune max total connections and per-route limits.
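For example, a sketch of those controls with PoolingHttpClientConnectionManager (the limits shown are illustrative, not recommendations; routePlanner comes from the earlier examples):

```java
import org.apache.hc.client5.http.impl.classic.CloseableHttpClient;
import org.apache.hc.client5.http.impl.classic.HttpClients;
import org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManager;

// Pool sizing: total connections across all routes, plus a cap per route
// (a "route" includes the proxy, so this effectively limits per-proxy concurrency).
PoolingHttpClientConnectionManager connectionManager = new PoolingHttpClientConnectionManager();
connectionManager.setMaxTotal(100);          // illustrative value
connectionManager.setDefaultMaxPerRoute(20); // illustrative value

CloseableHttpClient client = HttpClients.custom()
        .setConnectionManager(connectionManager)
        .setRoutePlanner(routePlanner) // routePlanner from the earlier examples
        .build();
```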
For heavier scraping or monitoring workloads, you rarely rely on a single proxy IP. Instead, you either rotate through a pool of proxy endpoints in your own code or send everything to a provider-managed rotating gateway. A simple client-side pool with OkHttp looks like this:
```java
import okhttp3.*;

import java.net.InetSocketAddress;
import java.net.Proxy;
import java.util.List;
import java.util.Random;

public class OkHttpProxyPool {
    private static final List<Proxy> PROXIES = List.of(
            new Proxy(Proxy.Type.HTTP, new InetSocketAddress("proxy1.example.com", 8080)),
            new Proxy(Proxy.Type.HTTP, new InetSocketAddress("proxy2.example.com", 8080)),
            new Proxy(Proxy.Type.HTTP, new InetSocketAddress("proxy3.example.com", 8080))
    );

    private static final Random RNG = new Random();

    public static OkHttpClient clientWithRandomProxy() {
        Proxy proxy = PROXIES.get(RNG.nextInt(PROXIES.size()));
        return new OkHttpClient.Builder()
                .proxy(proxy)
                .build();
    }

    public static void main(String[] args) throws Exception {
        OkHttpClient client = clientWithRandomProxy();

        Request request = new Request.Builder()
                .url("https://httpbin.org/ip")
                .build();

        try (Response response = client.newCall(request).execute()) {
            System.out.println(response.body().string());
        }
    }
}
```
In a real system you would usually cache one client per proxy instead of building a new OkHttpClient on every call, track failures and latency per proxy, temporarily drop endpoints that keep erroring, and rotate per request or per task depending on how the target behaves; a sketch of that caching follows below.
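A minimal sketch of that per-proxy caching, building on the pool above (the ProxyClientCache name and the base-client approach are illustrative, not a library API):

```java
import okhttp3.OkHttpClient;

import java.net.Proxy;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ProxyClientCache {
    private final OkHttpClient baseClient = new OkHttpClient.Builder()
            .connectTimeout(java.time.Duration.ofSeconds(10))
            .readTimeout(java.time.Duration.ofSeconds(20))
            .build();

    // One derived client per proxy; derived clients share the base client's pool and dispatcher.
    private final Map<Proxy, OkHttpClient> clients = new ConcurrentHashMap<>();

    public OkHttpClient forProxy(Proxy proxy) {
        return clients.computeIfAbsent(proxy, p -> baseClient.newBuilder().proxy(p).build());
    }
}
```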
If your provider offers a rotating gateway, your Java code may only need a single proxy configuration; the rotation happens on the provider side.
Always test proxies before plugging them into a production crawler or automation task.
Use any client to call an IP echo endpoint such as https://httpbin.org/ip or an equivalent:
```java
HttpClient client = HttpClient.newBuilder()
        .proxy(ProxySelector.of(new InetSocketAddress("proxy.example.com", 8080)))
        .build();

HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("https://httpbin.org/ip"))
        .GET()
        .build();

HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
System.out.println(response.body());
```
If the JSON output shows a different IP than your own server, the proxy is active.
Logging response codes, latency, and exceptions per proxy makes it easy to spot noisy or failing IPs.
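A small sketch of that kind of per-proxy measurement (the helper name and log format are illustrative):

```java
// Hypothetical helper: time one request through a given client and log the outcome per proxy.
static void probe(HttpClient client, String proxyLabel) {
    HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://httpbin.org/ip"))
            .timeout(java.time.Duration.ofSeconds(15))
            .GET()
            .build();
    long start = System.nanoTime();
    try {
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        long millis = (System.nanoTime() - start) / 1_000_000;
        System.out.printf("proxy=%s status=%d latencyMs=%d%n", proxyLabel, response.statusCode(), millis);
    } catch (Exception e) {
        System.out.printf("proxy=%s error=%s%n", proxyLabel, e);
    }
}
```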
| Error / Symptom | Likely Cause | What to Check / Fix |
|---|---|---|
| `407 Proxy Authentication Required` | Missing or wrong credentials | Verify username/password, or IP allowlisting status |
| `java.net.ConnectException: Connection refused` | Wrong host/port, proxy offline | Check endpoint, network ACLs, and provider status |
| `java.net.SocketTimeoutException` | Dead proxy or slow network | Increase timeout slightly or switch proxy |
| Lots of 403/429 from target sites | IP reputation issues, aggressive concurrency | Slow down, rotate proxies, or use cleaner IP ranges |
| SSL / TLS handshake errors | HTTP proxy misuse, outdated TLS | Use HTTPS proxies, update JVM trust store if needed |
| Random hangs without response | No timeouts or connection leaks | Always set connect/read timeouts and close responses |
In all three libraries, make sure responses and input streams are closed to avoid leaking connections.
Even for technical teams, proxy usage is not just an engineering problem. Always consider:
Terms of service
Confirm that your intended scraping, monitoring, or testing is allowed by the sites involved.
Robots and rate limits
Even when data is public, respect robots directives when applicable and avoid abusive request patterns.
Data protection
If you touch user data or personal information, ensure you have a lawful basis and appropriate safeguards.
Internal policies
Coordinate with security and legal teams so proxy usage does not conflict with company policies.
Logging and auditing
Keep sufficient logs for debugging and compliance, but avoid storing sensitive payloads longer than necessary.
Do I need to support all three HTTP clients?
No. Most applications standardize on one primary HTTP client. Choosing between HttpClient, OkHttp, and Apache HttpClient depends on your existing stack, feature needs, and team familiarity. Many modern services use the built-in HttpClient or OkHttp and only bring in Apache HttpClient for legacy reasons.
Should I use HTTP or SOCKS proxies?
For typical web scraping, API calls, and browser automation, HTTP/HTTPS proxies are sufficient and easier to configure. SOCKS proxies are more flexible at the TCP level but need additional setup or library support depending on the client, and are usually only needed when your provider specifically exposes SOCKS or you have non-HTTP protocols to tunnel.
How many proxies do I need?
It depends on concurrency, target strictness, and your error tolerance. A small batch job may work with a handful of IPs. Continuous high-volume crawlers often require dozens or hundreds so each IP only carries a modest amount of traffic. Start with a small pool, monitor block rates and latency, then scale based on real-world results.
Can I use free proxy lists in production?
Free proxy lists are almost never appropriate for production Java workloads. They are usually slow, unstable, and controlled by unknown operators who may inspect or modify traffic. For serious automation and data collection, rely on reputable paid providers with clear documentation, support, and clean datacenter ranges.
How should I structure proxy configuration in a larger codebase?
Abstract proxy configuration behind a small internal API: a factory for HTTP clients, a configuration object for hosts and credentials, and a simple rotation strategy. This keeps proxied and non-proxied calls consistent, makes it easy to swap providers, and centralizes logging, metrics, and error handling.
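A minimal sketch of such an internal API, assuming OkHttp as the underlying client (ProxyEndpoint and ProxiedClients are hypothetical names, not an existing library; Java 16+ records are assumed):

```java
import okhttp3.Credentials;
import okhttp3.OkHttpClient;

import java.net.InetSocketAddress;
import java.net.Proxy;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Configuration object: where the proxies live and how to authenticate against them.
record ProxyEndpoint(String host, int port, String username, String password) {}

class ProxiedClients {
    private final List<ProxyEndpoint> endpoints;
    private final AtomicInteger cursor = new AtomicInteger();

    ProxiedClients(List<ProxyEndpoint> endpoints) {
        this.endpoints = List.copyOf(endpoints);
    }

    // Factory method with a simple round-robin rotation over the configured endpoints.
    OkHttpClient next() {
        ProxyEndpoint e = endpoints.get(Math.floorMod(cursor.getAndIncrement(), endpoints.size()));
        String creds = Credentials.basic(e.username(), e.password());
        return new OkHttpClient.Builder()
                .proxy(new Proxy(Proxy.Type.HTTP, new InetSocketAddress(e.host(), e.port())))
                .proxyAuthenticator((route, response) -> {
                    if (response.request().header("Proxy-Authorization") != null) {
                        return null; // already tried these credentials
                    }
                    return response.request().newBuilder()
                            .header("Proxy-Authorization", creds)
                            .build();
                })
                .connectTimeout(java.time.Duration.ofSeconds(10))
                .readTimeout(java.time.Duration.ofSeconds(20))
                .build();
    }
}
```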
Proxies are no longer a niche concern reserved for custom scrapers. They are a core part of how modern Java systems access the web—whether you are building analytics tools, SEO monitoring, QA harnesses, or internal research pipelines.
By understanding how to configure proxies in HttpClient, OkHttp, and Apache HttpClient, and by layering in timeouts, connection pooling, rotation, and testing, you can build Java services that are both resilient and respectful of the sites they touch.
Teams that standardize on clean, dedicated datacenter proxies and good Java integration patterns find it much easier to scale workloads, troubleshoot issues, and stay compliant. When you design or upgrade a proxy layer for your Java services, treating proxies as a first-class part of the architecture helps your code focus on data and business logic rather than network headaches.

Nigel is a technology journalist and privacy researcher. He combines hands-on experience with technical tools like proxies and VPNs with in-depth analysis to help businesses and individuals make informed decisions about secure internet practices.