Is yum's HTTPS proxy support broken, or am I doing this wrong?
In /etc/yum.conf I've added this line:
proxy=http://160.0.234.1:8080
When trying to yum from my HTTPS repo, I get "X-Squid-Error: ERR_UNSUP_REQ 0" from the proxy:
[kagesenshi@Hikari ~]$ sudo yum install python-twisted
Loading "installonlyn" plugin
Setting up Install Process
Parsing package install arguments
https://161.0.2.218/pub/mirror/repos/fedora_i686/repodata/repomd.xml: [Errno 14] HTTP Error 400:
Server: squid/2.5.STABLE11
Mime-Version: 1.0
Date: Tue, 24 Jul 2007 15:17:50 GMT
Content-Type: text/html
Content-Length: 1137
Expires: Tue, 24 Jul 2007 15:17:50 GMT
X-Squid-Error: ERR_UNSUP_REQ 0
X-Cache: MISS from Network-Box
Proxy-Connection: close
My yum & urlgrabber versions: yum-3.2.1-1.fc7, python-urlgrabber-2.9.9-5.fc7
Can anybody confirm whether this is a bug or not?
> Is yum's HTTPS proxy support broken, or am I doing this wrong?
> In /etc/yum.conf I've added this line:
> proxy=http://160.0.234.1:8080
You need to put a slash at the end.
Try: proxy=http://160.0.234.1:8080/
Otherwise you might like to consult: http://debarshiray.multiply.com/journal/item/30
Good luck, Debarshi
On 7/27/07, Debarshi 'Rishi' Ray debarshi.ray@gmail.com wrote:
> > Is yum's HTTPS proxy support broken, or am I doing this wrong?
> > In /etc/yum.conf I've added this line:
> > proxy=http://160.0.234.1:8080
> You need to put a slash at the end.
> Try: proxy=http://160.0.234.1:8080/
> Otherwise you might like to consult: http://debarshiray.multiply.com/journal/item/30
> Good luck, Debarshi
It's still not working.
A little test with a Python script using urlgrabber yields the same error. A bug in urlgrabber? yum uses urlgrabber to fetch downloads, right?
import urlgrabber
import os

os.environ['https_proxy'] = 'http://160.0.234.1:8080/'
urlgrabber.urlgrab('https://161.0.2.218/pub/mirror/repos/fedora_i686/repodata/repomd.xml')
[kagesenshi@Hikari tmp]$ python testurlgrab.py
Traceback (most recent call last):
  File "testurlgrab.py", line 8, in <module>
    urlgrabber.urlgrab('https://161.0.2.218/pub/mirror/repos/fedora_i686/repodata/repomd.xml')
  File "/usr/lib/python2.5/site-packages/urlgrabber/grabber.py", line 597, in urlgrab
    return default_grabber.urlgrab(url, filename, **kwargs)
  File "/usr/lib/python2.5/site-packages/urlgrabber/grabber.py", line 927, in urlgrab
    return self._retry(opts, retryfunc, url, filename)
  File "/usr/lib/python2.5/site-packages/urlgrabber/grabber.py", line 845, in _retry
    r = apply(func, (opts,) + args, {})
  File "/usr/lib/python2.5/site-packages/urlgrabber/grabber.py", line 913, in retryfunc
    fo = URLGrabberFileObject(url, filename, opts)
  File "/usr/lib/python2.5/site-packages/urlgrabber/grabber.py", line 1001, in __init__
    self._do_open()
  File "/usr/lib/python2.5/site-packages/urlgrabber/grabber.py", line 1068, in _do_open
    fo, hdr = self._make_request(req, opener)
  File "/usr/lib/python2.5/site-packages/urlgrabber/grabber.py", line 1178, in _make_request
    raise new_e
urlgrabber.grabber.URLGrabError: [Errno 14] HTTP Error 400:
Server: squid/2.5.STABLE11
Mime-Version: 1.0
Date: Thu, 26 Jul 2007 17:11:35 GMT
Content-Type: text/html
Content-Length: 1137
Expires: Thu, 26 Jul 2007 17:11:35 GMT
X-Squid-Error: ERR_UNSUP_REQ 0
X-Cache: MISS from Network-Box
Proxy-Connection: close
-- GPG key ID: 63D4A5A7 Key server: pgp.mit.edu
-- fedora-devel-list mailing list fedora-devel-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-devel-list
On Thursday 26 July 2007 19:13:39 Hikaru Amano wrote:
> import urlgrabber
> import os
> os.environ['https_proxy'] = 'http://160.0.234.1:8080/'
> urlgrabber.urlgrab('https://161.0.2.218/pub/mirror/repos/fedora_i686/repodata/repomd.xml')
With another proxy and another HTTPS URL, the above works for me.
Does this work for you:
curl -x http://160.0.234.1:8080/ https://161.0.2.218/pub/mirror/repos/fedora_i686/repodata/repomd.xml
Or this:
curl -p http://160.0.234.1:8080/ https://161.0.2.218/pub/mirror/repos/fedora_i686/repodata/repomd.xml
Regards, Till
On 7/27/07, Till Maas opensource@till.name wrote:
> On Thursday 26 July 2007 19:13:39 Hikaru Amano wrote:
> > import urlgrabber
> > import os
> > os.environ['https_proxy'] = 'http://160.0.234.1:8080/'
> > urlgrabber.urlgrab('https://161.0.2.218/pub/mirror/repos/fedora_i686/repodata/repomd.xml')
> With another proxy and another HTTPS URL, the above works for me.
> Does this work for you:
> curl -x http://160.0.234.1:8080/ https://161.0.2.218/pub/mirror/repos/fedora_i686/repodata/repomd.xml
> Or this:
> curl -p http://160.0.234.1:8080/ https://161.0.2.218/pub/mirror/repos/fedora_i686/repodata/repomd.xml
I forgot to mention: all other applications (wget, curl, firefox, elinks, etc.) are able to connect and retrieve the file through the proxy, but python-urlgrabber and yum are not.
On 27/07/07, Hikaru Amano kagesenshi.87@gmail.com wrote:
> I forgot to mention: all other applications (wget, curl, firefox, elinks, etc.) are able to connect and retrieve the file through the proxy, but python-urlgrabber and yum are not.
Squid simply won't do it that way (a GET with an https: URL). The other programs are presumably issuing a CONNECT request, which Squid does support.
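To make the GET-vs-CONNECT distinction concrete, here is a small illustrative sketch (modern Python 3; proxy_request_line is a hypothetical helper written for this note, not part of urlgrabber or Squid) of the request line a well-behaved client sends to an HTTP proxy in each case:

```python
from urllib.parse import urlsplit

def proxy_request_line(url: str) -> str:
    """Return the request line a client should send to an HTTP proxy
    for the given URL (hypothetical helper, for illustration only)."""
    parts = urlsplit(url)
    if parts.scheme == "https":
        # HTTPS: ask the proxy for a raw tunnel to host:port, then run
        # the TLS handshake and the real GET inside that tunnel.
        return "CONNECT %s:%d HTTP/1.1" % (parts.hostname, parts.port or 443)
    # Plain HTTP: the proxy sees the full URL and fetches it itself.
    return "GET %s HTTP/1.1" % url

# The repo URL from the thread should be tunnelled, not GET-ed:
print(proxy_request_line("https://161.0.2.218/pub/mirror/repos/fedora_i686/repodata/repomd.xml"))
# -> CONNECT 161.0.2.218:443 HTTP/1.1
```

A GET request line carrying an https:// URL, which is what urlgrabber appears to be sending here, is exactly the kind of request Squid 2.5 refuses with ERR_UNSUP_REQ.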
On 7/27/07, Ralf Ertzinger fedora@camperquake.de wrote:
> Hi.
> On Fri, 27 Jul 2007 12:46:24 +0100, Bill Crawford wrote:
> > Squid simply won't do it that way (a GET with an https: URL). The other programs are presumably issuing a CONNECT request, which Squid does support.
> Are you saying urlgrabber uses GET for HTTPS? That seems badly broken.
> HTTPS through a proxy must use CONNECT; GET is plain broken for proxied connections.
Agreed. As far as I know, HTTPS must use CONNECT and not GET. Why does urlgrabber claim to support HTTPS if it doesn't do CONNECT?
So, is this a bug or a (weird) feature?
On Sat, 2007-07-28 at 15:00 +0800, Hikaru Amano wrote:
> On 7/27/07, Ralf Ertzinger fedora@camperquake.de wrote:
> > Hi.
> > On Fri, 27 Jul 2007 12:46:24 +0100, Bill Crawford wrote:
> > > Squid simply won't do it that way (a GET with an https: URL). The other programs are presumably issuing a CONNECT request, which Squid does support.
> > Are you saying urlgrabber uses GET for HTTPS? That seems badly broken.
> > HTTPS through a proxy must use CONNECT; GET is plain broken for proxied connections.
> Agreed. As far as I know, HTTPS must use CONNECT and not GET. Why does urlgrabber claim to support HTTPS if it doesn't do CONNECT?
> So, is this a bug or a (weird) feature?
I think, though I'm not 100% positive, that all of the URL handling at that level is done by urllib underneath urlgrabber. I've emailed the urlgrabber author to ask about this.
-sv
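For what it's worth, later Python versions added direct standard-library support for exactly this: an http.client connection can be told to open a CONNECT tunnel through a proxy before the TLS handshake. A minimal sketch (modern Python 3, reusing the proxy and repo host from the thread; nothing goes over the network until a request is actually issued):

```python
import http.client

# Connect to the *proxy*, not the origin server ...
conn = http.client.HTTPSConnection("160.0.234.1", 8080)

# ... and ask it to open a CONNECT tunnel to the real HTTPS host.
conn.set_tunnel("161.0.2.218", 443)

# A subsequent conn.request("GET", "/pub/mirror/repos/fedora_i686/repodata/repomd.xml")
# would now issue CONNECT, then the TLS handshake, then the GET inside the tunnel.
```

This is the behaviour the thread expects from urlgrabber: CONNECT through the proxy, never a bare GET with an https:// URL.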