Kraker Local Proxy Server -- Instruction Manual


Instructions on how to use the Socks5 Tunnel Proxy Server (no changes for v4d or v4e)

Addendum (November 9, 2022): Version 4e is an important update with many policy revisions and some new features. Most of the changes will be noted as pertaining to the new version but some smaller changes may not be so noted. Read the documentation carefully.

A crash logger has been added. A crash report is printed on the console when the server crashes but this won't be visible if the console closes (default behaviour on Windows and maybe Linux). See the file "_crashlog.txt" for the most recent crash report.

Node.js version 18 has an issue with "localhost" resolving to IPv6 instead of IPv4. This presents a problem for many applications (including the Tor server which only responds to IPv4). Perhaps the dev team will reverse course on this but whatever. The problem is solved by calling the system DNS directly when needed instead of letting Node.js do it on its own (the "localhost" domain is hardwired to "127.0.0.1"). As for the obvious question: will Kraker ever support IPv6? Short answer: not for some time because IPv6 support is still spotty everywhere.

The Kraker Local Proxy Server is compatible with Node.js versions 10 to 18.


Addendum (August 1, 2022): This manual has been updated with the full specifications for using the Kraker Local Proxy Server either in your Javascript programs or in your web browser's url bar. With the exception of the following section on the security model, these instructions and specifications are not things that you need to know in order to use Alleycat Player.


File access permissions - the Kraker security model

The proxy server implements a security model in order to prevent unauthorized access to files on the local drive. Alleycat Player is normally restricted by the web browser from freely accessing the file system but this limit can be bypassed via the proxy server. These basic rules apply:

1) All files in the Kraker home directory are accessible for reading.
2) A new file can be opened for writing but it is not permitted to modify an existing file.
3) No access at all is possible outside of the Kraker home directory.

It is sometimes desirable to read files located elsewhere on the local drive (to play a video, for example). Also, Alleycat Player has a new feature for saving m3u8 videos and this requires the ability to append to an existing file in order to concatenate the segments. This is not allowed under the basic rules. A special file called "_aliases.txt" is employed to control file access. If you intend to save m3u8 videos, you need to learn this. File path names are not permitted. You must use an alias. The syntax for an entry in the aliases file is simple:

+alias, +c:/myfolder/myvideo.mp4;

White space is irrelevant and you may include whatever comments you like in the file. The proxy server looks for a given name in between a plus sign and a comma. If the name is found, then the proxy will look for a path name in between a plus sign and a semicolon. To enable a file for writing, put a question mark at the end of the alias (in front of the comma). The alias may not contain a colon or a slash or a backslash.
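Putting those rules together, a small "_aliases.txt" might look like this (the paths are hypothetical; note the question mark on the second alias, which enables writing):

```
My read-only video alias:    +myvideo, +c:/myfolder/myvideo.mp4;
This alias can be written:   +mysave?, +c:/myfolder/capture.mp4;
```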

A file called "_aliases_sample.txt" is provided with the Alleycat installation. This file should be renamed as "_aliases.txt". It contains 18 preset aliases for the purpose of saving m3u8 videos. Each video uses two aliases, one for an audio track (where applicable) and one for the video track.

As an additional security precaution, "_aliases.txt" is blocked for both reading and writing. File reading is implemented via the GET method in a standard HTTP request. The file name is simply the plus sign and the alias. Files may be created or appended via the PUT method. No other mechanism has been provided for accessing the file system. This is not intended as a replacement for standard file access. Examples of a fetch request in Javascript:

fetch ("http://localhost:8080/+myfile", {method: 'GET'});
fetch ("http://localhost:8080/+myfile", {method: 'PUT', body: mydata});
fetch ("http://localhost:8080/++myfile", {method: 'PUT', body: mydata});

The first example can be simulated by just entering the URL in the web browser. There is no equivalent for the PUT method. The third example uses an extra plus sign to indicate that the body of the request should be appended to an existing file.
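As a sketch, the requests above can be wrapped in a small helper. The names fileUrl and saveSegments are hypothetical, and the code assumes a Kraker server listening on port 8080 with a write-enabled alias:

```javascript
// Hypothetical helper: build a Kraker file URL from an alias.
// An extra plus sign requests append mode, as described above.
function fileUrl(alias, append = false) {
  return "http://localhost:8080/" + (append ? "++" : "+") + alias;
}

// Sketch: write the first segment to a new file, then append the rest.
// Requires a running Kraker server and a write-enabled alias.
async function saveSegments(alias, segments) {
  await fetch(fileUrl(alias), { method: "PUT", body: segments[0] });
  for (const seg of segments.slice(1)) {
    await fetch(fileUrl(alias, true), { method: "PUT", body: seg });
  }
}
```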

New for version 4e:

Directories are now supported. The only difference is that a directory path must end with a slash. The GET method will return a list of files (but not directories) if the path resolves to a directory. Two notes on the Kraker home directory: 1) the files are not listable and 2) directories contained within are accessible and the files are listable. When implementing directory paths (especially if they are write-enabled), you should treat the alias as a password. The only thing preventing a potential attack is that the alias is a secret.


List of Alleycat Player files:

kraker.js Proxy Server Version 3b
alleycat-player.htm Alleycat Player Demo Version
crypto.js cryptography module https://cdnjs.cloudflare.com/ajax/libs/crypto-js/3.1.9-1/crypto-js.min.js
hls_player.js HLS/m3u8 player module https://cdn.jsdelivr.net/npm/hls.js@0.12.0/dist/hls.min.js
dash_player.js DASH/mpd player module https://reference.dashif.org/dash.js/v3.0.2/dist/dash.all.min.js
poster.jpg background image

List of supplementary files:

_aliases_sample.txt file access permissions (rename as _aliases.txt)
_blank_dash_mpd.txt template for Youtube DASH
_blank_live_mpd.txt template for Youtube DASH
_https_crt.pem HTTPS certificate
_https_key.pem HTTPS private key


Purpose of the Kraker Local Proxy Server

The primary function of the proxy server, as explained in the installation instructions, is to bypass the web browser restrictions on Cross-Origin Resource Sharing (CORS). The secondary function is to manipulate HTTP headers, both outgoing and incoming. No other content is inspected or modified (with the exception of m3u8 files as required by Alleycat Player). There are many reasons for modifying HTTP headers. Some websites require a certain header to be set (for example, "x-requested-with"). Others may need a cookie or a certain user agent. You can try the following URL in your web browser:

http://localhost:8080/accept=application/dns-json|!content-type=application/json|*https://eth.link/dns-query?type=A&name=google.com

This is a "DNS over HTTPS" request with the return format set to JSON. Let's break this down. First, you need the name of the proxy server ("http://localhost:8080") followed by a slash and the parameters for setting the required headers. The first parameter is the name of the header ("accept") and the mime type ("application/dns-json"). This is the outgoing request header which informs the destination server that you expect the response in the JSON format. Without this header, the server will return an error or not respond so it is important. Each header is separated by a vertical bar. The second header is to be returned from the server. This one is not critical but it is needed if you want to see a nice JSON structure instead of plain text. The exclamation mark indicates that this is an incoming response header. It is called "content-type" and the setting is the mime type "application/json". This tells the browser how to display the response. The headers end with "|*" and the destination URL follows.

A DoH server request will not work from a web browser url bar without the assistance of the Local Proxy Server. The expected audience for such a request is the browser itself and not the end user (though some DoH servers might be more helpful). This is just one example of how the proxy server can be used to break past an artificial barrier.


Detailed structure of a proxy request

Multiple headers or internal commands must be separated by a vertical bar ( | ) and the final header must end with a vertical bar and an asterisk ( |* ). The URL of the destination server follows. If no headers are specified then these special characters should not appear. If a header value needs to be URI decoded (due to the presence of special characters such as spaces) then prepend the value with an exclamation mark.
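These rules can be captured in a small URL builder. The name krakerUrl is hypothetical; request headers are passed as plain name/value pairs and response-header overrides get the exclamation-mark prefix on the name:

```javascript
// Sketch: assemble a Kraker proxy URL from request headers, response
// header overrides, and a destination URL. Headers are joined with
// vertical bars and the list is terminated with "|*".
function krakerUrl(target, reqHeaders = {}, resHeaders = {}) {
  const parts = [];
  for (const [name, value] of Object.entries(reqHeaders))
    parts.push(name + "=" + value);
  for (const [name, value] of Object.entries(resHeaders))
    parts.push("!" + name + "=" + value); // "!" marks an incoming response header
  const prefix = parts.length ? parts.join("|") + "|*" : "";
  return "http://localhost:8080/" + prefix + target;
}
```

The "DNS over HTTPS" example from earlier can be rebuilt as krakerUrl("https://eth.link/dns-query?type=A&name=google.com", { accept: "application/dns-json" }, { "content-type": "application/json" }).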

New for version 4e: You may use a tilde in place of the vertical bar. This enhancement is due to Chrome-based browsers which inconveniently replace the vertical bar with %7C. (Hint: you really should be using Firefox for development work.)

A special note about case: do not use uppercase characters in header names else unexpected behaviour will occur. The header "Accept" is not the same as "accept". This is a limitation in the way headers are handled in Node.js which, in turn, is a limitation in the way Javascript handles object attribute names (the names are case-sensitive). Besides, the HTTP standards state that case should be ignored when processing header names. The web browser employs mixed case in header names as a stylistic convention and not because it is required.
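The case sensitivity is easy to demonstrate with a plain Javascript object, which is essentially how Node.js stores headers:

```javascript
// "Accept" and "accept" are distinct keys in a Javascript object,
// so they would be treated as two different headers.
const headers = {};
headers["Accept"] = "text/html";
headers["accept"] = "application/json";
console.log(Object.keys(headers).length); // 2 -- two separate entries
```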

The best way to familiarize yourself with the URL syntax is to watch the server console while playing some videos in Alleycat Player. You will see each request as it is sent to the destination server as the app fetches one or more files in its search for a video link.

You may notice that Alleycat Player sometimes inserts a double-comma in the "Origin/Referer" field. This is a special syntax for m3u8 files which resolves a problem with relative URLs. This type of URL lacks the domain name which is the name of the server from where the file was retrieved. This is an issue when passing the video through Kraker because the HLS/m3u8 playback module will submit an incorrect URL to the proxy server. In order to fix this, Kraker must load the m3u8 and correct the affected links.

Expert tip: IP address test

To test whether a particular website is available over an alternate IP address, use this in the browser url bar:

view-source:http://localhost:8080/host=whatever.com|*https://1.2.3.4


Shadow port management

Please refer to the section "Advanced hacking: shadow ports and website mimicry".

Shadow ports are created in your Kraker settings file (in the commands below, "shadow" may be used as an alias for "localhost:8080"). For example:
[? search SHD:~https://www.startpage.com] [? mymusic SHD:+c:/music]

To play a music file, you could just type http://mymusic/song.mp3 in your browser url bar.
Alternatively: http://localhost:8080/$mymusic$song.mp3 (which bypasses the Socks5 proxy).

Use "SHD" by itself to delete a shadow port. Use named groups with the "activate" command:
[?test test1 SHD:+c:/music] [?test test2 SHD:+c:/photos] [?done test1:80 SHD] [?done test2:80 SHD]

Multiple domains may be specified (separated by vertical bar). The prefixes ($+~) may be applied separately:
[? example|$~example.com SHD:https://example.com] [? mymusic|$mymusic SHD:+c:/music]

The string "$$$" may be used in place of the server name:
[? www.bitchute.com|www.google.com SHD:$~https://$$$]


Shadow port forking

Forking serves three purposes:

Borrowing a shadow port:   http://localhost:8080/$mymusic$song.mp3
Loading a local file:      https://www.bitchute.com/pathname?$password$@test.html
Stealing cookies:          https://www.bitchute.com/pathname?$password$@

The first example was covered in the previous section. This may be used to access local files or a website. It can be used in any application (not just in a web browser) because the Socks5 proxy is not required. The domain name in between the dollar signs must be either dotless or prepended with a dot.

The origin must be dotless (like "localhost"), prepended with a dot or be a localhost shadow. A localhost shadow is a shadow port with an empty parameter string (meaning that it is a direct alias for "localhost:8080" or "localhost:8081").

The second example may be used to force a web page to load locally instead of through a website (you will need to first create a shadow port called "www.bitchute.com" or whatever). Replace "pathname" with the original file path on the target server. The "window.location" should look normal to the Javascript inside the page (such as a bot challenge) as long as the "$password$@test.html" part is in a query string (this is not required by Kraker). The "password" part is your "shadow_secret" as defined in your settings file. The "@" is optional. If present, the shadow port will be removed.

The third example is the same as the second except that a local file name is not present. This command will return the cookie string sent to the server by the web browser. It is possible that a particular cookie may only apply on a particular server path but that functionality is rarely used. You generally just want to get the cookies at the server root.

There is an additional method of forking a dotless shadow port without the Socks5 proxy but it seems to only work from a web browser. The trick lies in how a "localhost" subdomain resolves to an IP address. It seems that most (all?) web browsers ignore the subdomain part. For example:

https://mymusic.shadow.localhost:8081/song.mp3

This request appears at "localhost:8081" with the host name "mymusic.shadow.localhost" which the server can then resolve to a shadow port. If I try this from an external app like my favourite video player (SMPlayer) then the request will fail with a DNS error. SMPlayer tries to resolve through the system DNS which does not work. I tested Brave and Firefox and this works just fine. The interesting thing about this particular syntax is how it avoids the dreaded "invalid security certificate" problem. The Kraker server certificate covers subdomains on "shadow.localhost" (because subdomains on "localhost" are not allowed). Just something that I thought was fun to implement but I'm disappointed that the trick only works in a browser. Oh well.


Advanced hacking: passing cookies with the Accept header; zz-location and zz-set-cookie

This is functionality which can only be invoked from a Javascript program. Cookie strings tend to be rather long because they often contain multiple cookies. Instead of passing the cookies as a parameter in the URL string, the "accept" header may be used. Here's an example fetch statement:

fetch ("http://localhost:8080/https://anysite.com", { headers: { accept: "**" + cookie } });

The cookie string must be prepended with a double asterisk. The proxy server will change the "accept" header value to "*/*" and put the cookie string in a "cookie" header. Sometimes, the browser will emit an OPTIONS pre-flight request prior to sending the request specified in the fetch statement. Since the destination server can refuse the request, Kraker will automatically greenlight it without sending it on (valid only for "localhost" or a localhost shadow). This is true for any OPTIONS request, regardless of the reason.

New for version 4e: An alternate value for the "accept" header may be specified (example: "**text/html**" + cookie).
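A minimal sketch of this convention, covering both the plain form and the version 4e alternate-value form (acceptWithCookie is a hypothetical name):

```javascript
// Build the "accept" header value that smuggles a cookie string.
// With no alternate value, the cookie is prefixed with "**";
// otherwise the alternate value is wrapped in double asterisks.
function acceptWithCookie(cookie, acceptValue) {
  return acceptValue ? "**" + acceptValue + "**" + cookie : "**" + cookie;
}

// Usage (requires a running Kraker server):
// fetch("http://localhost:8080/https://anysite.com",
//       { headers: { accept: acceptWithCookie("sid=abc123") } });
```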

Two secondary functions may be invoked by setting the "accept" header with or without a cookie (a simple "**" is all that is needed). The fetch statement does not provide any good way to control redirection. Also, cookies returned by the server may be hidden from a Javascript program by the web browser depending on the parameters provided with the cookies.

The proxy server will detect and delete the "location" header and return its value as "zz-location". The "set-cookie" headers (there can be more than one) will be copied to "zz-set-cookie". These additional headers are exposed via "access-control-expose-headers". Note that, to avoid the misapplication of cookies, the "set-cookie" headers are always deleted for "localhost" or a localhost shadow.
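Reading the exposed headers from a fetch response might look like this (redirectInfo is a hypothetical helper; the header names are the ones described above):

```javascript
// Pull Kraker's substitute headers out of a response's Headers object.
function redirectInfo(headers) {
  return {
    location: headers.get("zz-location"),     // null when the server did not redirect
    setCookie: headers.get("zz-set-cookie")   // null when no cookies were returned
  };
}

// Usage (requires a running Kraker server):
// const res = await fetch("http://localhost:8080/https://anysite.com",
//                         { headers: { accept: "**" } });
// const { location, setCookie } = redirectInfo(res.headers);
```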


Advanced hacking: shadow ports and website mimicry

The shadow port is a new feature of the Kraker Local Proxy Server which binds the functionality of port 8080 (the HTTP port) and port 8081 (the previously unused HTTPS port) with the Socks5 port at 8088. Put simply, a shadow port serves as an alias for a website when it is desirable to fake out the web browser. A simple example would be embedding a website which has the "x-frame-options" header set to "same-origin". This means that you cannot run the website in an iframe that does not have the website as its origin. I have encountered this issue when hacking with Alleycat Player (which allows a web page to be embedded in the iframe of a video viewer). You will get a warning from the browser that embedding is not allowed. The only way to get around this is to employ a shadow port.

Shadow ports can only be used if your browser is set up to use port 8088 as a proxy. The domain name passed to the Socks5 port for DNS lookup can be flagged for routing through port 8080 or port 8081. This allows the headers to be modified before the request is sent to the destination. For example, removing the problematic "x-frame-options" header so that embedding won't be blocked. There are other uses like stealing cookies or routing the website through another proxy server to hide your IP address. This also enables advanced hacking techniques for, say, cracking the Cloudflare bot challenge. If the Javascript code inspects "window.location" to determine the source of the script then this could prevent running the bot challenge from a local file. Not all bot challenges do this but, for those that do, there is no way around it without using an extension or a modified browser (until you figure out how to disable the location test).

Setting up a shadow port is really easy: http://localhost:8080/@@proxy@~https://www.bitchute.com

Run the above command and then run "http://proxy". Voila. Bitchute is running under an entirely different domain. This works because Bitchute uses relative links inside its pages. That is, the links do not specify "www.bitchute.com" as the domain. The page was loaded as "proxy" so that is the address where all of the relative links will go. Use your Network Monitor tool to verify this. However, try clicking on a video link. Bitchute will tell you that an error occurred. Oops. The Bitchute server won't honour your request because the "referer" header is wrong (clicking the link generates a POST request and not a GET request; you can right-click the link and open it in a new tab). The proxy server strips off both "origin" and "referer" by default. Go back to the above command line and type "**" after the tilde. The tilde is important because it blocks excessive output in your server console and it allows all headers, such as cookies, to be returned in the server responses.

The video link will now work without any problem. The proxy server is sending the correct "referer" header to satisfy the Bitchute server. Cautionary note: if the web browser has keep-alive sockets still open then you may need to wait a minute or two for the new setting to apply. Now let's take this to the next step. We want to mimic the Bitchute domain because any direct links to "www.bitchute.com" will bypass the shadow port and go directly to Bitchute. Also, the "referer" is wrong for requests sent to any domain other than "proxy" and some resources may not load (this is not the case with Bitchute but it may be true for other websites).

For security reasons, you need a password to activate a dotted domain. A dotless domain like "proxy" is not an issue because it can't be used to break browser security in devious ways. For example, suppose you were logged in to Facebook or Twitter. It would be possible for a malicious web page to create a shadow port and do naughty things with your account.

Open _settings.txt and create your password like this: $shadow_secret=password$

The Kraker Local Proxy Server is not exactly a hot target for hackers (I'm planning for the future here) so I won't warn you to use a strong password because that would be silly at this point in time. Just replace "password" with something that is easy to remember and easy to type. We can move on once you've saved and reloaded the settings file.

http://shadow/@
http://shadow/@password@www.bitchute.com@$~**https://www.bitchute.com

A shadow port called "shadow" is already defined in the proxy server. It is meant as an alias for the longer "localhost:8080" or "localhost:8081". Run the first command line shown above to see that "shadow" is already set up for both HTTP and HTTPS. You will also see "proxy" which we were playing with earlier. The list is shown only on the server console for the obvious security reason. Note that each shadow port has a port number. By default, HTTP is port 80 and HTTPS is port 443 but you can specify any port number (append ":" and a port number to the shadow name). Technically, you can use any port number you like because the request does not actually go to a real port but your web browser may disallow certain port numbers. Also, a non-standard port number won't work for mimicking a real domain like "www.bitchute.com".

Run the second command line to set up the shadow port for Bitchute. Note the dollar sign before the tilde in the final parameter. This indicates that the shadow port must be treated as encrypted so it must be routed through port 8081 instead of port 8080. The proper domain for Bitchute is HTTPS so the web browser is expecting to see an encrypted connection. Note that we still need the "**" so that the "referer" is not blank. Now try opening the Bitchute website at "www.bitchute.com". Oops. We have another problem. The security certificate is invalid. There's a good reason for that. The web browser is not attempting a secure connection with Bitchute but with the proxy server whose security certificate does not cover the Bitchute domain. We'll get to certificates later but, for now, just tell your web browser to accept the invalid certificate. Browse around the site to verify that everything is working.

What is happening here is the same thing that you would encounter with, say, a corporate proxy server set up to monitor what the employees are doing on the corporate computers. In that case, the server intercepting the request would forge a proper certificate for the Bitchute domain to avoid the "invalid certificate" issue. In order to do this, every computer that connects to the proxy must have a signing authority certificate installed else everyone would have the same problem that we just had. See the next section on creating a self-signed server certificate.


Advanced hacking: creating a self-signed certificate authority and server certificate

First, download these two files: certificate.htm and jsrsasign-all-min.js (original source: https://github.com/kjur/jsrsasign).
Place the files in your Kraker home directory. Now start the app with "http://localhost:8080/certificate.htm".

Use the View button to observe the current state of the server certificate with the default name of _https_crt.pem. This should already be in your Kraker directory along with _https_key.pem. These files are needed to connect to the HTTPS server at "https://localhost:8081" or "https://shadow". You are going to replace both files and create a new certificate authority in _auth_cert.crt. Delete the RSA key file (_https_key.pem) because the app cannot overwrite it. Now press the "Create Key" button. The green status window will show "Working" for a short time as the app computes a new RSA key. If you get a "Failed" message then you forgot to delete the file (there is no other possible reason for the failure).

Now you have a brand new RSA key. This is important because the original key is public and it can be abused to attack anyone who relies on a certificate authority based on that key (I'm not actually sure of this but it doesn't hurt to play safe). Next, set up the certificate authority under the "Subject" header. Fill in the four fields with whatever you like and then press "Create authority". This should not take any time at all. Press the View button to verify that the certificate contains the correct info. The certificate is good for 10 years.

The whole reason for doing any of this is to change the server certificate in order to include the domain names of HTTPS sites that you want to mimic with a shadow port. This is important because your browser might not allow you to accept an invalid certificate. This may be because the site is on the "HTTP Strict Transport Security" preload list.

Now you're ready to create a new server certificate. All you really need is a name in the "Common Name" field and a list of the sites that you want to authorize under "Subject Alternate Names". You should have, at the very least, these entries:

shadow, localhost, *.shadow.localhost

Add anything else you need after that. You are allowed to use any combination of blank lines, spaces and/or commas as separators. Next, delete the certificate file and then press "Create certificate". Verify that you got what you asked for. The process is like a cascade. The RSA key is independent. The RSA key is needed to create the authority. The key and the authority are needed to create the certificate. Next, restart the Kraker HTTPS server to load the new RSA key and certificate. Execute this command in your browser url bar:

http://localhost:8080/?restart=crt,key

You can specify the name of the certificate file followed by a comma and the name of the key file. Leave blank to use the default. This allows you to switch among multiple certificates (you can have multiple key files as well but there's no good reason to do that).

How to install the certificate authority in your web browser (Windows 10)

Firefox        Tools >> Settings >> Privacy & Security >> Certificates >> View Certificates >> Authorities >> Import
Waterfox       Tools >> Options >> Advanced >> Certificates >> View Certificates >> Authorities >> Import
Pale Moon      Preferences >> Preferences >> Certificates >> View Certificates >> Authorities >> Import
Chrome-based   Settings >> Privacy and Security >> Security >> Manage Certificates >> Trusted Root Certification Authorities >> Import

The Firefox-based browsers manage their own certificate store but the Chrome-based browsers use the store provided by the operating system. Import your certificate authority and you're done. You can also right-click on the file to launch the certificate installer (the extension must be "crt").


Advanced hacking: the internal commands (timeout, vpx, key)

The HTTP/HTTPS proxy employs a 30-second timeout for a connecting socket and a 3-minute timeout for an idle socket (there are no timeouts in the Socks5 proxy). The first timeout is employed to guard against an unreliable third-party proxy server which may connect but fail to respond promptly afterward (your computer's operating system allows 21 seconds for a server to connect). The idle timeout will terminate a connection if no traffic has been detected for the time period. The default time is generally long enough to not interfere with normal operation though it is not unusual for a browser or other application to attempt to keep an idle socket open for a longer period.

The timeout internal command (format: !timeout:15) supports two modes of operation: a negative number of seconds for the connection timeout or a positive number of seconds for the idle timeout. There is no maximum and the minimum timeout period is 5 seconds.

Similar to the "VPN" option provided by the Socks5 proxy, the HTTP/HTTPS proxy supports the use of a third-party proxy at the level of an individual connection. This allows an application to use any number of proxies for web scraping or whatever purpose (format: !vpx:ip:port:username:password). By default, the Kraker proxy does not validate the security certificate for HTTPS connections. Certificate validation is not needed for targeted file downloads but might be useful in some instances. Use a terminating colon to enable validation (!vpx:: if no proxy is specified).
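Assuming the internal commands go in the header section of the proxy URL (separated by vertical bars and terminated with "|*", like ordinary headers), request URLs might be built like this (withCommands is a hypothetical name):

```javascript
// Sketch: prepend internal commands to a proxied URL. For !timeout, a
// negative value sets the connection timeout, a positive one the idle timeout.
function withCommands(target, commands) {
  return "http://localhost:8080/" + commands.join("|") + "|*" + target;
}

// A 15-second idle timeout:
//   withCommands("https://example.com", ["!timeout:15"])
// Routing through a hypothetical third-party proxy:
//   withCommands("https://example.com", ["!vpx:1.2.3.4:3128:user:pass"])
```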

The default behaviour is to delegate DNS lookups to the third-party proxy server. This is considered more secure since it prevents a potential attacker from deducing your location from your DNS access pattern (especially if you are using the DNS service provided by your ISP). It is what the "security experts" tell us so Kraker employs that policy. If you wish to enable local DNS for specific domains then you can do so in your settings file:

[? anyserver.com VPN:] or [? anyserver.com VPN:1.2.3.4] or [? anyserver.com 1.2.3.4] or maybe [? .com VPN:]

Setting the IP address directly is the best option since this totally avoids a DNS lookup. If you're using an untrusted proxy (which is probably what you're doing) then maybe trusting it with your DNS is a bad idea. I don't know because that depends on what sort of dastardly business you may be up to.

One word of caution: some proxy servers will try to hijack your HTTPS connection. The reason is probably to protect themselves from being complicit in the trafficking of, say, child porn. I have found that almost all servers located in the United States do this. If you want to use those servers then you have no choice but to disable certificate validation. Such servers won't work directly from the Socks5 proxy because the web browser will catch the forged certificate. This information may not apply if you are using a paid proxy service. I'm talking about the thousands of free servers that exist for whatever reason. Free servers tend to be horribly unreliable in any case.

New for version 4e:

Policies have been implemented for the proper handling of cookies on shadow ports and for securing the cookies from abuse by potential attackers. Kraker will observe the state of the request header "origin" and the response headers "access-control-allow-credentials" and "access-control-allow-origin" in order to correctly inform the browser that credentials are allowed. The target is the "a-c-a-o" header which must match the "origin" header else the browser will block the transaction. This functionality was implemented in version 4d but without any security provision. The risk is that a potential attacker can take control of a shadow port with cookies to hijack a secure session. For security, the shadow port must include an access key, as follows:

[? www.bitchute.com SHD:$~!key:abc123|*https://$$$]

An application that wishes to include credentials on the shadow port must transmit the access key (as shown below). The access key may appear anywhere in the path string. If the access key is an empty string then nothing needs to be appended to the path but the shadow port will be open to abuse.

https://www.bitchute.com/pathname$abc123$ or https://www.bitchute.com/$abc123$pathname
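The access-key convention can be sketched as a one-liner (withAccessKey is a hypothetical name; as noted above, the key may appear anywhere in the path):

```javascript
// Append a shadow-port access key, delimited by dollar signs, to a path.
function withAccessKey(path, key) {
  return path + "$" + key + "$";
}

// withAccessKey("/pathname", "abc123") yields "/pathname$abc123$"
```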


HTTP Toolkit

There must be some other (much simpler) tool like this out there but I haven't found it yet. This one has a lot of bells and whistles that I don't need. In any case, my problem is that some sites try to block me from hacking them by abusing the Javascript debug command to crap out the browser's inspector tools. That's when I immediately start up the HTTP Toolkit. For a while, I just used the Firefox browser patch but running a second browser is not what I want to do. HTTP Toolkit has an interception port available so I added a little trick to Kraker to take advantage of it. Problem is that the port is HTTP and not Socks5 (like Tor is). The "VPN" option won't work for this without an angle. The angle is to specify the IP address as "0.0.0.0" which will trigger output through the same process that Kraker uses for I2P. The port number is 8000 so try it out. Other thing you should do: change the certificate authority. Replace the authority and key with your own. In my version, the files (ca.pem and ca.key) are located here:

C:\Users\User\AppData\Local\httptoolkit\Config


Performance notes and the Cloudflare Bot Fight Mode

The HTTP/HTTPS proxy employs a socket reuse policy to avoid the time cost of opening a new connection for every transaction. This behaviour is not linked to the status of the incoming connection. An idle socket is kept open for 30 seconds (not configurable); a longer timeout would risk reusing a socket that the remote server has already closed. It is possible that a server might time out in less than 30 seconds, but I have not seen such a case. The TLS session can also be reused for another socket to the same server (there is no reuse after the sockets are closed). The performance improvement is somewhat noticeable with DNS-over-HTTPS but I have not attempted a detailed assessment. Your mileage may vary, I guess.

Additionally, the Socks5 proxy (which the HTTP/HTTPS proxy uses) employs a connection retry policy which can sometimes help with a stubborn server. If a connection is established but then dropped within 12 seconds, it is retried once after 3 seconds. This can happen if the server is refusing connections because it is too busy, or the server could just be flaky. My observations indicate that the retry policy rescues a failed connection attempt about 10% of the time. Your mileage, of course, may vary.
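The retry decision reduces to a small predicate plus a timer. The sketch below is my own reading of the policy (the names are hypothetical, not taken from Kraker's source):

```javascript
// Hypothetical sketch of the retry policy described above: retry once,
// 3 seconds later, if the connection was established but then dropped
// within 12 seconds of starting.
const RETRY_DELAY_MS = 3000;
const EARLY_FAIL_MS = 12000;

function shouldRetry(elapsedMs, alreadyRetried) {
  return elapsedMs < EARLY_FAIL_MS && !alreadyRetried;
}

// Usage inside a connect routine (sketch):
// socket.once('close', () => {
//   if (shouldRetry(Date.now() - started, retried)) {
//     setTimeout(() => connect(host, port, /* retried = */ true), RETRY_DELAY_MS);
//   }
// });
```

Limiting the policy to a single retry keeps a genuinely dead server from tying up the proxy in a retry loop.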

I hate (HATE!) the Cloudflare bot protection. In the case of an HTTP connection, it is no protection at all, really. I discovered that Cloudflare looks at certain header names for proper case usage. The affected headers are: Host, User-Agent, Accept, Accept-Encoding, Accept-Language and Connection. I wrote some code to correct these headers and put them at the beginning of the header stack.
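A minimal sketch of that fix, assuming Node's usual lower-cased header object as input (the function name and internals are my illustration, not Kraker's actual code): restore the browser-style casing for those six headers and move them to the front of the header stack.

```javascript
// Sketch: restore browser-style casing for the headers Cloudflare checks
// and place them first; all other headers follow unchanged.
const CANONICAL = ['Host', 'User-Agent', 'Accept', 'Accept-Encoding',
                   'Accept-Language', 'Connection'];

function fixHeaderCase(headers) {
  const fixed = {};
  for (const name of CANONICAL) {             // canonical headers first
    const key = name.toLowerCase();
    if (key in headers) fixed[name] = headers[key];
  }
  for (const key of Object.keys(headers)) {   // then everything else as-is
    if (!CANONICAL.some((n) => n.toLowerCase() === key)) {
      fixed[key] = headers[key];
    }
  }
  return fixed;
}
```

Javascript objects preserve insertion order for string keys, so serializing the result emits the corrected headers at the beginning of the stack.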

Header names are converted to lower case by Node.js, and this makes sense since the HTTP specifications state that case in header names is not significant. It also makes my code easier to write since case does not need to be considered. A server is not supposed to reject an HTTP transaction based on the case of header names, but we're not talking about normality anymore. It is war out there and Cloudflare is determined to win, whatever inconvenience that incurs. Ignore the specs? Sure, why not? I shouldn't have to do this.

My main target is https://banned.video. This is an Infowars site and it gave me a problem a while back until I discovered an alternative domain (Infowars has a lot of them). Without the header fix, the HTTP version of the site simply returns status 403 with no data. With the fix, the site redirects to its HTTPS version. That's progress, since the same fix lets me access another unrelated site that doesn't mind serving plain HTTP. However, I cannot get into the HTTPS version of that other site; I get stopped dead with not even a bot challenge to solve.

The problem with HTTPS is the negotiation (or TLS handshake) that is needed to establish an encrypted connection. The negotiation protocol is open-ended, meaning that there are a million ways to do it. This means that the specifics of the negotiation can be used to "fingerprint" the incoming connection, in much the same way that browsers can be fingerprinted based on the details of HTML rendering. So Cloudflare takes a fingerprint of the TLS handshake and rejects the connection if the handshake doesn't look like it is coming from a web browser. This is a big deal because it locks out Node.js and, really, any tool which is not based on browser code. It also locks out non-transparent proxy servers, so banned.video cannot be accessed through a corporate proxy; I don't know why Infowars doesn't care about that customer base. For that matter, I don't understand what the problem is in the first place. It's been like this for two years but I never heard of a bot attack on Infowars. I'm sure they get attacked from time to time but sheesh.

There are actually very few sites that do this. Most of the ones I have seen are pirate video sites. That seems to be a pattern, since Banned Video is also a video site; it suggests that the issue might be web scraping rather than any kind of attack. Whatever. I can't figure out how to get into the site. There is no bot challenge, just a hard stop. There is currently no tool for modifying the TLS handshake in Node.js, and that seems to apply across the board, though it apparently can be done with Golang (also known as Go). In any case, I'm not inclined to bother trying to solve the Cloudflare bot challenge since I'm not even getting that far with the sites I want to hack. This is The End for now.

The HTTP Toolkit guy has more info here. (Note: his advice on bypassing the fuckery is outdated. Unfortunately.)


Go to the Alleycat Player instruction manual.