lars

Hello and welcome to my little nook of the internet. I am Lars (a.k.a. looopTools) and I am a Software Engineer, Researcher, and Author from Denmark. I am passionate about storage optimisation, so that we may be able to handle the enormous amount of data we generate every day, both in terms of reducing the physical storage needed without data loss and reducing the size of our data centres to cut the energy they consume. This is what my research currently focuses on. As an author I write fantasy novels (work in progress) aimed at young adults, with which I hope to inspire the younger generations to question what they are taught and formulate their own opinions. I am currently working as a PhD Fellow at Aarhus University, Department of Engineering, with the Group for Network Computing, Communication, and Storage (Net-X) under the supervision of Daniel E. Lucani.

I have 10+ years of experience in the software industry through full-time and student programmer positions. I have predominantly worked on back-end or semi-low-level software, but have at times worked as a front-end developer.

On this site you can find my CV and technical blog.

Distributed Denial of Service, Part 1 - Introduction & Theory

May 12, 2020

This blog post is the first in a series of posts in which I will explain a type of cyber-attack called Distributed Denial of Service (DDoS). The purpose of these posts is to

  1. Spread knowledge of DDoS
  2. Provide an understanding of what DDoS is
    • How to perform DDoS
    • How to defend against DDoS
  3. Show how to make a more sophisticated DDoS attack

The posts will be split into what is DDoS, history/why it is used, theory, execution, and implementation, but I don’t know how many posts yet. In the end, I will compile all the posts into a single PDF, probably with some editing and with the hopes that it may be able to serve as study-material.
I have to emphasise that conducting a DDoS attack may (we will get into that) be illegal, and these posts are solely meant for study purposes.

[IMPORTANT:] As I intend to publish this work in the end, you might see me go back and edit these posts from time to time, adding, removing, or editing content.

In this post, I cover what a DDoS attack is, what it has been used for in the past, and the “legality” of DDoS.

What is a Distributed Denial of Service Attack

Almost since computers were first connected by networks, there have been cyber-attacks. The purposes of these attacks are varied and diverse. Among them is a group of attacks that aim to disrupt a provided service by disallowing access to it. These are, in general, referred to as Denial of Service (DoS) attacks.

To understand how DoS attacks work, we must understand what they are trying to do. When you access a service provided over a network, be it the internet or another network, you are interacting with some form of web server. Yes, even if you access a cloud service. You interact with the service through web requests, e.g. HTTP or HTTPS, often using multiple requests to complete a task. However, it is rarely just you who is interacting with a service; in many cases, millions of people are accessing the same service at the same time. Think of the Google Search site, Facebook, or Twitter. Now, a web server is just a computer, and it has limited resources, including limits on how many web requests can be served and how many open connections can be maintained. If you exceed these limits, the computer will either drop a request and not serve it, or take a long time to serve it.
This is what a DoS attack takes advantage of, by overloading a web server with a massive number of web requests, resulting in some requests being dropped and not served.

These are the basics of a DoS attack, but what then makes it a distributed (DDoS) attack? As machines have become increasingly powerful, they are able to serve more and more requests, and we have developed more complex defence strategies against DoS attacks. This means it is more challenging to execute a DoS attack from a single machine. Therefore, to increase the chance of an attack succeeding, attackers started distributing the attacks so that they were performed from multiple nodes at the same time, increasing the number of possible requests, and DDoS was born. Basically, you make all nodes send requests to the same server.

What has DDoS been used for?

Now that we know what DoS and DDoS are, what are they used for? Well, quite frankly, what it says in the attack's name. But it can be used for different reasons. The most obvious one is harassment of a service with criminal intent, for example with the purpose of collecting a ransom for stopping the attacks. This version of DDoS is called Ransom DDoS (RDoS), and groups such as DD4BC and Armada Collective used this style of attack in 2015, 2016, and 2017. Radware has a pretty good article on this topic: Ransomware Attacks: RDoS (Ransom DDoS) On The Rise. Then there is the “chaos” approach, where DDoS is used to disrupt a service for no other reason than “why the heck not”.

Another usage of DDoS is the virtual sit-in protest, which has become an integrated part of hacktivism. To explain: in the physical world, a sit-in is a tool used by activists to block access to a service or building by blocking its access point. DDoS used as a sit-in is the same idea, just online. Examples of such sit-ins are well documented; for instance, sit-ins were used against the airline Lufthansa (see German precedent upholds online civil disobedience).

The “legality” of DDoS

So we have a criminal version and an activist version. But is it legal? I do not know of any country where RDoS is legal, and that is for good reason. But whether virtual sit-ins are activism or crime has been up for discussion in many countries for some time. Germany seems to think they are activism, while the USA claims they are criminal. So in general, you need to check for yourself. Sorry, this cannot be stated more precisely.

Using Fedora and why

May 9, 2020

TL;DR: The story of how I found Fedora and why I keep using it.

I have been using Linux on and off for more than 20 years, and the first time I encountered it was on a school machine running Yggdrasil Linux, the first ever paid Linux distribution. Back then I was used to Windows, DOS, and Apple System, so Linux was weird, yet strangely familiar. Already then, Linux and Apple System had things in common, at least Yggdrasil and Apple System did. Since that encounter, I only had sporadic meetings with Linux and only used it at school or in a virtual machine, until 2005. In 2005 I got my first, and only, Windows laptop running Windows XP (good times), but I had no real clue about antivirus software and other security measures, which resulted in 3 months of viruses and problems with Avast, AVG, and BullGuard. Additionally, I had to reinstall my machine from scratch 2 times in that period. At this point, I hadn't really touched Linux for roughly 4 years, and then only on systems people had already set up. But I remembered it, and that it was more familiar to Apple System. So I started looking, and back then Debian was the reigning king of “easy to install”. I tried it in a virtual machine first and thought, what the heck, let us try. So it became my daily driver, dual-booted with Windows, as I needed Word and PowerPoint for specific tasks, and let us be honest, at this time OpenOffice sucked. I also had some weird problems with my WiFi card for some reason I never truly figured out, and it pissed me off. However, in 2004 a new distribution had seen its birth: Ubuntu, the distro for newbies, and back then I was a newbie. I happily left Debian behind and jumped at the new shiny distribution. I used it for a solid 2.5 years until I went back to Apple, more specifically Mac OS X Leopard in 2008, and say what you want, Leopard and Snow Leopard were the two best iterations of OS X.
However, I had a virtual machine with Ubuntu, and I used it almost every day for some weird task OS X could not do at the time, or because I had gotten used to a program only available on Linux. Then came spring 2009, and I was asked to make a project for a friend, as cheap as possible and as stable as possible. Another requirement was that the project had to be done in Java. This gave me the freedom to look outside Windows and the ability to test the project on different operating systems. As I scoured the internet for options, a lot of people said Debian or RedHat for servers; for some reason CentOS was never mentioned. I later found out that people back then in general regarded it as a useless toy. But then I found some who said that for a hobby project RedHat may be too expensive, so try OpenSUSE or Fedora. I was intrigued; I had heard of OpenSUSE before but never used it, and Fedora I had never heard of. So I spent some time setting up both servers, and along the way kept strict notes on what I did, the problems I ran into, and how I fixed them. Essentially this was my first lab notebook. Afterwards I compared my notes and came to the conclusion that Fedora had been way easier to set up and troubleshoot, so I went with that. During the process I had also had time to set up both as desktop systems, and Fedora had been much easier than Ubuntu, at least for me. So I decided to swap my Ubuntu VM for a Fedora one, and here we are, a decade later, and I am still using Fedora, not only as a VM but also as my main operating system at work.

So why do I keep using Fedora? What is it that keeps me in the blue? Is it the colour? Is it the hat? Is it familiarity? Let me answer two of those really quick: of course it is the hat, and no, it is not the colour. Familiarity certainly has something to do with it; it is always nice. But it is not my main reason for always returning to Fedora. I actually have a few.

First off, yum/dnf vs apt. One of the reasons I decided to look for a distribution other than Debian and Ubuntu in the first place was that I had a lot of problems with apt-get at the time, often corrupting my system when I did updates, which drove me to the brink of insanity and beyond. It made me hate .deb and apt-based distributions for quite a while. I have only once had similar problems with yum, and that was in Fedora 17. Now, these issues have been fixed in both Ubuntu and Debian, and I do use both on servers from time to time today. Another thing with yum is that the command names make more sense to me, and instead of having to use apt-SOMETHING to do a search or install, everything is bundled into one command in yum, which I like a lot. It makes the system much more consistent to use, and it is easier to remember commands.

Next, Gnome, or more precisely vanilla Gnome. Gnome has always been my favourite desktop environment for desktops and multi-monitor setups. But most distributions have always added their own sugar; Manjaro is, in my opinion, a horrific example. I want stock Gnome and to work my way up from there; I want to enable things, not disable things. Also, sorry Manjaro/Ubuntu/other Gnome theme designers, most of the available themes seem tailored to fit either Mac or Windows users looking to adapt, and often neither option is done well. Therefore it is a huge bonus for me that Gnome comes pretty vanilla in Fedora.

Then we have updating to the next version. I usually update to the newest Fedora 3 months after its release, just to get the worst bugs, if any, out of the way. In the “old days” this meant a clean install, but today we can use a dnf plugin for system upgrades, dnf-plugin-system-upgrade, which makes it super easy to upgrade to the next version of Fedora. I do need some time to clean up old packages and left-over dust from the elder days :p Now! I do love rolling releases, and I am envious of Arch Linux's and Manjaro's ease of updating. But this is as good as it gets without being rolling.
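For reference, an upgrade with the plugin goes roughly like this (the release number 32 below is just an example target; substitute whichever version you are upgrading to):

```shell
# Install the system-upgrade plugin
sudo dnf install dnf-plugin-system-upgrade

# Make sure the current release is fully updated first
sudo dnf upgrade --refresh

# Download the packages for the target release, then reboot into the upgrade
sudo dnf system-upgrade download --releasever=32
sudo dnf system-upgrade reboot
```

After the reboot, dnf performs the actual upgrade offline and boots you into the new release when it is done.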

Finally, and this is actually the big one: ease of use. For the longest time I recommended Linux Mint to newcomers. It is a good, solid distribution and, in my opinion, better than Ubuntu for newcomers. But since Fedora 30, I have actually switched to recommending Fedora. The installation processes of Ubuntu, Mint, and Fedora have become very similar, and it is super easy; even installing third-party non-free drivers is easy. Additionally, when you go to websites for help, the Fedora community has become much more mature and welcoming to newcomers than it was five years ago. This makes Fedora much more approachable than it used to be, which means I can quickly get a new PhD student, PostDoc, sys admin, developer, or someone else up and running without having to babysit the person through the whole process, increasing our productivity. I also manage servers for projects, some using RedHat, others CentOS, and yet others Fedora Server, and the benefit of a shared ancestry and close family bond makes it easy for me to switch between these systems with minimal effort, again increasing productivity.

So these are the reasons I use Fedora and keep on using Fedora. Now, I want to make it clear that I have jumped to other distributions on occasion: I used Manjaro for 2 years, and Arch from time to time, as well as Gentoo and Slackware. The latter still has a very special place in my heart.

Converting Integers to Vectors in C++

Apr 7, 2020

I have been working on improving the on-disk storage footprint of a system I am working on for the university. The system currently uses JSON even for binary data, which was not a good design choice for multiple reasons. I have been working on replacing JSON with pure binary data, without the overhead of the textual representation used in JSON. One of my problems has been numbers, though luckily only natural numbers, mainly unsigned integers, and I know that there are a lot of tools like Boost that provide serialisation for this. But, first, I do not really like Boost, no offence, and second, other solutions have depended heavily on if-else chains, which is bad for the predictive optimisation provided by -O2 and -O3, so I didn't really get why they used if-else chains. So I decided to play a bit with templates and see if I could make converter functions from a type Type, which is a natural number, to a std::vector<uint8_t> and back, without using if-else chains. Though this is not necessarily faster, it will be cleaner.

Now, I will take a bit of inspiration from other solutions and use left and right shifts, and I want to make the functions as simple to use as possible. Also, for those who have not used C++ before, you can think of templates as generics.

To do the conversion, we need a natural number as input, and it must be able to be any natural number type in C++, so int, long, uint8_t, and so on, and we need the output as a std::vector<uint8_t>. Now, in C and C++ we have the sizeof operator, which gives us the size of any type or variable in bytes. Therefore, we know that our output vector must have the size of our input in bytes. Next, we make the assumption that all natural number types have a size in bits such that size % 8 = 0, which holds since sizeof reports whole bytes. Then we can construct the function:

template<typename Type>
void convert_natural_number(const Type input, std::vector<uint8_t>& output)
{
    output = std::vector<uint8_t>(sizeof(input));

    for (size_t i = 0; i < sizeof(input); ++i)
    {
        output.at(i) = static_cast<uint8_t>(input >> (i * 8));
    }
}

A few comments on the function. 1) In the for-loop we have output.at(i) = static_cast<uint8_t>(input >> (i * 8)); what this does is take the input and right-shift it by i * 8 bits, where i is the byte offset and runs over the size of our input in bytes; the cast then keeps only the lowest byte. This allows us to avoid having many if-statements containing specialised shift statements for each length of integer. 2) The function header void convert_natural_number(const Type input, std::vector<uint8_t>& output) might be weird for some who are used to using function return values. What I do here is pass a reference to our output vector and use it as the output parameter of the function.

To “reverse” the conversion I have made a similar function, though with the output and input types swapped and moving from right shifts to left shifts:

template<typename Type>
void convert_natural_number(const std::vector<uint8_t>& input, Type& output)
{
    output = 0;
    for (size_t i = 0; i < input.size(); ++i)
    {
        Type next = input.at(i);
        next = next << (i * 8);
        output ^= next;
    }
}

The only real comment I have on this function is that you should always zero your output in such functions, just to be on the safe side.

A final remark: I have only tested the functions with natural numbers. Note that the shift operators are not defined for floating-point types or strings, so those would need a different approach, for instance copying the raw bytes with std::memcpy, and as always, please test first.

If you have improvement ideas, please write me; I am always ready to learn something new.

./Lars