Ruby powers a lot of automation tooling, monitoring scripts, and backend services for SEO, e-commerce, and analytics. Adding proxies to these Ruby workflows lets you spread requests across many IPs, reach region-locked content, and keep your origin servers out of the line of fire.
This guide walks through how to integrate proxies in Ruby using Net::HTTP, HTTParty, and Mechanize. You’ll see real code examples for datacenter and authenticated proxies, plus advice on rotation, testing, error handling, and best practices.
Before diving into code, keep a few basics in mind:
HTTP(S) proxies
Most Ruby libraries support standard HTTP proxies, including authenticated endpoints of the form http://username:password@host:port (see the short parsing example after this list).
SOCKS proxies
Possible via additional gems (for example, socksify), but most scraping and automation stacks stick with HTTP/HTTPS datacenter proxies.
Authentication
Two typical approaches: sending a username and password with each request (as in the URL form above), or whitelisting your server's IP with the provider so no credentials are needed.
Where proxies help
Spreading high-volume requests across many IPs, reaching region-locked content, and keeping your origin servers out of direct contact with target sites.
If you are using a pool of cheap datacenter proxies, always validate a few endpoints before wiring them into a large Ruby job.
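If your provider hands out endpoints in the authenticated URL form shown above, Ruby's built-in URI class can split them into the host, port, and credentials that the examples below expect. The endpoint here is just a placeholder:
require 'uri'
# Placeholder proxy URL in the username:password@host:port form
proxy_uri = URI('http://username:password@proxyserver.example:8080')
proxy_uri.host     # => "proxyserver.example"
proxy_uri.port     # => 8080
proxy_uri.user     # => "username"
proxy_uri.password # => "password"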
Net::HTTP is Ruby’s standard HTTP client and sits underneath many other libraries.
require 'net/http'
require 'uri'
uri = URI('https://httpbin.org/ip')
proxy_host = 'proxyserver.example'
proxy_port = 8080
Net::HTTP::Proxy(proxy_host, proxy_port).start(uri.host, uri.port, use_ssl: uri.scheme == 'https') do |http|
  request = Net::HTTP::Get.new(uri)
  response = http.request(request)
  puts response.body
end
For proxies that require authentication, pass the username and password as additional arguments to Net::HTTP::Proxy:
require 'net/http'
require 'uri'
uri = URI('https://httpbin.org/ip')
proxy_host = 'proxyserver.example'
proxy_port = 8080
proxy_user = 'user'
proxy_pass = 'pass'
Net::HTTP::Proxy(proxy_host, proxy_port, proxy_user, proxy_pass).start(
  uri.host,
  uri.port,
  use_ssl: uri.scheme == 'https'
) do |http|
  request = Net::HTTP::Get.new(uri)
  response = http.request(request)
  puts response.body
end
For production jobs, add timeouts and a simple retry loop so one slow or dead proxy does not stall the whole run:
require 'net/http'
require 'uri'
uri = URI('https://httpbin.org/ip')
proxy_host = 'proxyserver.example'
proxy_port = 8080
def fetch_with_retries(uri, proxy_host, proxy_port, attempts: 3)
  tries = 0
  begin
    Net::HTTP::Proxy(proxy_host, proxy_port).start(
      uri.host,
      uri.port,
      open_timeout: 5,
      read_timeout: 10,
      use_ssl: uri.scheme == 'https'
    ) do |http|
      req = Net::HTTP::Get.new(uri)
      res = http.request(req)
      return res
    end
  rescue StandardError => e
    tries += 1
    warn "Request failed (#{e.class}): #{e.message} (attempt #{tries}/#{attempts})"
    retry if tries < attempts
    raise
  end
end
response = fetch_with_retries(uri, proxy_host, proxy_port)
puts response.body
Use short but reasonable timeouts and log failures so you can identify bad IPs in your pool.
HTTParty is a popular wrapper around Net::HTTP that simplifies many tasks.
Add it to your project:
gem install httparty
require 'httparty'
response = HTTParty.get(
  'https://httpbin.org/ip',
  http_proxyaddr: 'proxyserver.example',
  http_proxyport: 8080,
  http_proxyuser: 'user',
  http_proxypass: 'pass',
  timeout: 10
)
puts response.body
For APIs or crawlers, you can centralize proxy configuration in a client class:
require 'httparty'
class ProxyClient
  include HTTParty
  base_uri 'https://httpbin.org'
  default_options.update(
    http_proxyaddr: 'proxyserver.example',
    http_proxyport: 8080,
    http_proxyuser: 'user',
    http_proxypass: 'pass',
    timeout: 10,
    headers: {
      'User-Agent' => 'Ruby-HTTParty-Client'
    }
  )
end
response = ProxyClient.get('/ip')
puts response.body
This pattern keeps your proxy configuration in one place and makes it easy to swap endpoints or credentials later.
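One way to make that swap even easier is to read the values from the environment instead of hard-coding them. A minimal sketch, assuming you expose the settings as PROXY_ADDR, PROXY_PORT, PROXY_USER, and PROXY_PASS (these variable names are arbitrary):
require 'httparty'
# Hypothetical client that pulls proxy settings from environment variables
class EnvProxyClient
  include HTTParty
  base_uri 'https://httpbin.org'
  default_options.update(
    http_proxyaddr: ENV['PROXY_ADDR'],
    http_proxyport: ENV.fetch('PROXY_PORT', '8080').to_i,
    http_proxyuser: ENV['PROXY_USER'],
    http_proxypass: ENV['PROXY_PASS'],
    timeout: 10
  )
end
response = EnvProxyClient.get('/ip')
puts response.body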
Mechanize is a higher-level library that behaves more like a mini-browser. It manages cookies, sessions, redirects, and form submissions, which is useful for multi-step workflows.
Install the gem:
gem install mechanize
require 'mechanize'
agent = Mechanize.new
agent.set_proxy('proxyserver.example', 8080, 'user', 'pass')
page = agent.get('https://httpbin.org/ip')
puts page.body
A more involved flow that logs in and then reuses the same cookies and proxy for later requests:
require 'mechanize'
agent = Mechanize.new
agent.set_proxy('proxyserver.example', 8080, 'user', 'pass')
agent.user_agent_alias = 'Mac Safari' # or another profile
login_page = agent.get('https://example.com/login')
form = login_page.forms.first
form['username'] = 'my_user'
form['password'] = 'my_pass'
dashboard = form.submit
puts "Logged in as: #{dashboard.title}"
# Subsequent requests reuse cookies and the same proxy
protected_page = agent.get('https://example.com/account')
puts protected_page.uri
Mechanize is an excellent fit when you need cookies, form handling, and link traversal on top of a cheap datacenter proxy layer.
If you have multiple proxy endpoints, you can rotate them in Ruby to reduce the load on each IP and lower the risk of blocks.
require 'httparty'
PROXIES = [
  { addr: 'proxy1.example', port: 8080, user: 'user1', pass: 'pass1' },
  { addr: 'proxy2.example', port: 8080, user: 'user2', pass: 'pass2' },
  { addr: 'proxy3.example', port: 8080, user: 'user3', pass: 'pass3' }
].freeze
def get_with_rotating_proxy(url)
  proxy = PROXIES.sample
  HTTParty.get(
    url,
    http_proxyaddr: proxy[:addr],
    http_proxyport: proxy[:port],
    http_proxyuser: proxy[:user],
    http_proxypass: proxy[:pass],
    timeout: 10
  )
rescue StandardError => e
  warn "Proxy failed (#{proxy[:addr]}): #{e.message}"
  nil
end
10.times do
  res = get_with_rotating_proxy('https://httpbin.org/ip')
  puts res&.body
end
For higher scale, you can track per-proxy success rates, temporarily bench endpoints that keep failing, and spread work across threads or worker processes so one slow proxy does not hold up the whole job.
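As a rough sketch of the benching idea, here is a small pool class that skips a proxy for a cooldown period after a failure. It reuses the PROXIES constant from the snippet above; the cooldown length is an arbitrary example value:
require 'httparty'
class ProxyPool
  COOLDOWN = 300 # seconds to skip a proxy after a failure (arbitrary example value)

  def initialize(proxies)
    @proxies = proxies
    @benched = {} # addr => Time of the last failure
  end

  # Prefer proxies that have not failed recently; fall back to the full pool
  def pick
    available = @proxies.reject { |proxy| benched?(proxy) }
    (available.empty? ? @proxies : available).sample
  end

  def report_failure(proxy)
    @benched[proxy[:addr]] = Time.now
  end

  private

  def benched?(proxy)
    failed_at = @benched[proxy[:addr]]
    failed_at && (Time.now - failed_at) < COOLDOWN
  end
end
pool = ProxyPool.new(PROXIES)
proxy = pool.pick
begin
  res = HTTParty.get(
    'https://httpbin.org/ip',
    http_proxyaddr: proxy[:addr],
    http_proxyport: proxy[:port],
    http_proxyuser: proxy[:user],
    http_proxypass: proxy[:pass],
    timeout: 10
  )
  puts res.body
rescue StandardError
  pool.report_failure(proxy)
end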
Before you send production traffic through any proxy, validate that the endpoint is reachable, that authentication works, and that requests actually exit through the proxy's IP rather than your own:
require 'net/http'
require 'uri'
require 'json'
def check_proxy(proxy_host, proxy_port, user: nil, pass: nil)
  uri = URI('https://httpbin.org/ip')
  http_class = Net::HTTP::Proxy(proxy_host, proxy_port, user, pass)
  http_class.start(uri.host, uri.port, use_ssl: true, open_timeout: 5, read_timeout: 10) do |http|
    res = http.get(uri.request_uri)
    data = JSON.parse(res.body)
    puts "Proxy #{proxy_host}:#{proxy_port} origin => #{data['origin']}"
  end
rescue StandardError => e
  warn "Proxy check failed for #{proxy_host}: #{e.class} - #{e.message}"
end
check_proxy('proxyserver.example', 8080, user: 'user', pass: 'pass')
If the origin reported in the response differs from your own server's public IP, traffic is flowing through the proxy.
You can extend this pattern to sweep your entire pool on a schedule, record response times, and flag endpoints that report an unexpected region.
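For example, a quick sweep that reuses check_proxy from the snippet above and times each endpoint with Ruby's standard Benchmark module (the PROXIES list is the same placeholder pool used earlier):
require 'benchmark'
PROXIES.each do |proxy|
  elapsed = Benchmark.realtime do
    check_proxy(proxy[:addr], proxy[:port], user: proxy[:user], pass: proxy[:pass])
  end
  puts format('%s:%d checked in %.2fs', proxy[:addr], proxy[:port], elapsed)
end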
Here are some typical issues you may see when using proxies in Ruby:
| Error / symptom | Likely cause | How to fix |
|---|---|---|
| Proxy Authentication Required or HTTP 407 | Bad username/password or IP not whitelisted | Verify credentials; confirm your server IP is authorized |
| Connection refused or Errno::ECONNREFUSED | Wrong host/port or proxy is offline | Check the endpoint; try from the command line with curl |
| execution expired or timeout errors | Slow proxy, network congestion, or blocked IP | Increase timeout slightly; rotate to another proxy |
| SSL certificate or handshake errors | HTTP proxy used for HTTPS or TLS mismatch | Use correct protocol; ensure the proxy supports HTTPS |
| High CAPTCHA or 403/429 rate | IPs overused or target site is strict | Slow down, randomize timing, rotate proxies more aggressively |
| Inconsistent geo or ASN | Mixed or mis-labeled proxy pool | Confirm with your provider; filter proxies per region |
Always log errors with enough detail (proxy endpoint, URL, stack trace) so you can identify patterns rather than guessing.
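A minimal sketch of that kind of logging with Ruby's standard Logger class, rescuing the specific exception classes from the table above (the proxy endpoint is a placeholder):
require 'net/http'
require 'uri'
require 'logger'
logger = Logger.new($stdout)
uri = URI('https://httpbin.org/ip')
proxy_host = 'proxyserver.example'
proxy_port = 8080
begin
  Net::HTTP::Proxy(proxy_host, proxy_port).start(uri.host, uri.port, use_ssl: true, open_timeout: 5, read_timeout: 10) do |http|
    http.request(Net::HTTP::Get.new(uri))
  end
rescue Net::OpenTimeout, Net::ReadTimeout => e
  # Slow or unresponsive proxy: record which endpoint and URL were involved
  logger.warn("timeout (#{e.class}) via #{proxy_host}:#{proxy_port} for #{uri}")
rescue Errno::ECONNREFUSED, SocketError => e
  # Wrong host/port or the proxy is offline
  logger.error("connection failed via #{proxy_host}:#{proxy_port} for #{uri}: #{e.message}")
end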
Even when you are using cheap proxies, the same rules apply: respect each site's terms and robots directives, keep request rates reasonable, and handle any data you collect responsibly.
Treat proxies as one part of a responsible data-access strategy, not a way to ignore policies.
Net::HTTP is enough for basic workloads and gives you fine-grained control. HTTParty improves ergonomics for APIs and JSON, while Mechanize is better for session-based or multi-step flows with forms and cookies. Many teams use Net::HTTP plus HTTParty for most tasks and reserve Mechanize for workflows that look more like a browser.
Ruby can use SOCKS5 proxies, but you usually need extra gems or system-level tunneling. Most tutorials and providers focus on HTTP and HTTPS proxies, which integrate directly with Net::HTTP, HTTParty, and Mechanize. If you truly need SOCKS5, confirm that your environment and provider support it before you commit.
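If you do go that route, the socksify gem adds a SOCKS-aware wrapper around Net::HTTP. A minimal sketch, assuming the gem is installed (gem install socksify) and a SOCKS5 proxy is listening at a placeholder endpoint:
require 'socksify/http'
require 'uri'
uri = URI('https://httpbin.org/ip')
# Route the request through a SOCKS proxy instead of an HTTP proxy
Net::HTTP.SOCKSProxy('proxyserver.example', 1080).start(uri.host, uri.port, use_ssl: true) do |http|
  puts http.get(uri.request_uri).body
end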
There is always a small latency penalty when you route traffic through another network hop. With solid datacenter proxies and realistic timeouts, the impact is usually modest. For large scraping or monitoring workflows, you can offset the extra latency by running more requests in parallel and using efficient libraries.
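As a rough illustration, a few threads can keep several proxied requests in flight at the same time (placeholder proxy endpoint, httpbin.org as the target):
require 'httparty'
urls = Array.new(5, 'https://httpbin.org/ip')
threads = urls.map do |url|
  Thread.new do
    HTTParty.get(
      url,
      http_proxyaddr: 'proxyserver.example',
      http_proxyport: 8080,
      timeout: 10
    )
  end
end
# Thread#value waits for each request and returns its response
threads.each { |t| puts t.value.code }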
It depends on your concurrency, the strictness of target sites, and acceptable error rates. Small tools may be fine with a few IPs, while heavier crawlers often need dozens or hundreds of proxies to keep per-IP volume low. Start with a small pool, monitor success rates and error codes, and then grow your allocation based on actual results.
Free proxy lists are rarely safe for serious work. They are often unstable, overused, or operated by unknown entities who may inspect or alter traffic. For anything beyond experiments, use reputable paid providers with clear documentation, support, and predictable IP quality.
Ruby’s standard library and ecosystem make it straightforward to layer proxies into Net::HTTP, HTTParty, and Mechanize. Once you understand how to configure endpoints, pass authentication, and rotate through a pool of IPs, you can scale crawlers, monitors, and test suites without exposing your own infrastructure.
For teams that rely on Ruby for automation and data collection, the most important step is discipline rather than complexity: validate each proxy, track performance, adjust rotation, and keep your usage compliant with the rules of the sites you touch. With that foundation in place, swapping in better or cheaper proxy providers becomes a configuration change, not a rewrite.
If you need stable, developer-friendly dedicated datacenter proxies for Ruby projects, ProxiesThatWork offers plans that plug directly into the patterns shown here. You can start small, wire the endpoints into your Ruby clients, and scale up as your scraping, monitoring, or analytics workloads grow.

Avery is a data engineer and web scraping strategist who focuses on building scalable, efficient, and secure web scraping solutions. She has extensive experience with proxy rotation, anti-bot techniques, and API integrations for data-driven projects.