One thing that has often puzzled me is how little good advice there is on reading large files efficiently when writing code.
Most people use whatever the canonical file-reading idiom for their language is, until the day they need to read a genuinely large file and it’s too slow. Then they google “efficiently reading large files in <lang>”, get pointed at a buffered reader of some sort, and that’s that.
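To make that split concrete, here is a minimal sketch in the shell (big.log is a hypothetical stand-in for any large file); the same naive-versus-buffered divide shows up in most languages:

```bash
# Naive: count lines with a read loop. bash's `read` issues one tiny
# read() per line (byte-at-a-time on a pipe), so this crawls on a big file.
n=0
while IFS= read -r _; do
  n=$((n + 1))
done < big.log
echo "$n"

# Buffered: awk pulls the file in large blocks and iterates records out of
# its internal buffer, which is usually dramatically faster.
awk 'END { print NR }' big.log
```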
Javier had a simple shell script he posted to our internal chat a few days ago. Its goal was to pull all the IP ranges for a given country from https://ipinfo.io/ in preparation for a footprint (let’s use PL as an example). Since this involved pulling multiple webpages, I was curious what the most efficient way to do it in the shell would be. Truthfully, the actual problem, pulling data from the site and gathering BGP routes, didn’t interest me; I wanted to look at how to do mass HTTP enumeration with curl as efficiently as possible.
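As a sketch of the shape of that experiment (the paginated path below is hypothetical; substitute however the site actually splits the data), the interesting variable is whether each page gets its own curl process or one process fetches them all:

```bash
# One curl process per page: every iteration forks a fresh process and
# pays for a new TCP + TLS handshake against the same host.
for i in $(seq 1 10); do
  curl -s "https://ipinfo.io/countries/pl/$i"
done

# One curl process for all pages: curl's URL globbing expands the range
# itself and reuses a single connection across the requests.
curl -s "https://ipinfo.io/countries/pl/[1-10]"
```

On curl 7.66 and newer, the same glob can also be fanned out over concurrent connections with -Z (--parallel).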