The nomad developer setup #2: Infrastructure as Code
In the first article, I shared a quick and easy way to access VS Code from any browser. You still need to create a cloud provider account and set up a server. In this second article I will share a way to automate every step between creating your account and using VS Code in the browser. To that end, I am sharing a GitHub repo at the end of this article. It contains all the Infrastructure as Code (IaC) you need.
IaC is a practice in software engineering, mostly on the DevOps side, that involves managing and provisioning infrastructure through code rather than manual processes. It allows for the automated deployment and configuration of infrastructure, enabling consistency, scalability, and version control for your infrastructure.
The repository combines three very powerful tools: Packer, Ansible and Terraform.
- Packer is a tool to create machine images, so you avoid re-installing everything every time you start an instance.
- Ansible is an automation tool that simplifies complex tasks like configuration management. In a simple YAML file (a playbook) you can install and configure your server(s).
- Terraform is an infrastructure as code tool that enables the provisioning and management of cloud resources using declarative configuration files.
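To make this concrete, here is a minimal, hypothetical Ansible playbook of the kind mentioned above. The host group, package name, and tasks are illustrative assumptions, not taken from the repo:

```yaml
# Sketch of a playbook: prepare a host for the browser-based VS Code setup.
# The group name "dev_servers" and the package choices are assumptions.
- hosts: dev_servers
  become: true
  tasks:
    - name: Install Docker
      ansible.builtin.apt:
        name: docker.io
        state: present
        update_cache: true

    - name: Ensure the Docker service is running
      ansible.builtin.service:
        name: docker
        state: started
        enabled: true
```

Running `ansible-playbook` with a playbook like this gives you a reproducible server configuration instead of a list of manual steps.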
Please check the README carefully: it lists the current limitations and will be updated as the repo evolves.
In the next article I will add even more automation using a CI/CD (continuous integration and continuous delivery) pipeline built on GitHub Actions workflows, so you can start and stop this infrastructure at will without needing anything other than a web browser.
Part 4/5: Research, Rants, & Ridiculousness: The Lighter Side of PhD Madness
PhD: the art of turning coffee, chaos, and code into a degree, one panic attack at a time.
- My machine learning model predicted I'd finish my PhD on time. Spoiler: Even AI has a sense of humor.
- Neurotoxicity research: figuring out if it's the toxins affecting the brain, or just the endless hours in the lab.
- Snake venom for drug discovery? Sure, because handling deadly snakes is less frightening than asking my advisor for a deadline extension.
- I told my computer to find a cure for snake bites. It opened a travel site to Antarctica. No snakes, no bites, problem solved!
"Supervised and Unsupervised Learning in 90 Seconds of Reading"
**Brief Definition:**
Supervised and unsupervised learning are two fundamental facets of machine learning, each specifically tailored to handle distinct types of data. In supervised learning, the machine learning algorithm is trained on a labeled dataset, where each data point consists of both input features and corresponding output labels. The goal is for the algorithm to learn the mapping from inputs to outputs based on these labeled examples. In unsupervised learning, the machine learning algorithm is trained on an unlabeled dataset to find hidden patterns, structures, or relationships within the data. Unlike supervised learning, there are no predefined output labels for the algorithm to learn from.
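As a minimal code sketch of the difference (the data here is invented for illustration): in the supervised case we hand the algorithm inputs together with labels, and in the unsupervised case we hand it raw values and let it find groups on its own.

```python
import numpy as np

# Supervised: each input x comes with a label y; learn the mapping x -> y.
X = np.array([1.0, 2.0, 3.0, 4.0])
y = 2 * X + 1                           # labels provided by a "teacher"
slope, intercept = np.polyfit(X, y, 1)  # learns y ≈ 2x + 1 from labeled pairs

# Unsupervised: no labels at all; discover structure (two clusters here).
points = np.array([0.1, 0.2, 0.15, 5.0, 5.2, 4.9])
threshold = points.mean()               # naive one-dimensional clustering rule
clusters = points > threshold           # pieces that "belong together"
```

In practice you would use a proper clustering algorithm such as k-means rather than a mean threshold; the point is only that no labels are involved in the second case.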
**Intuition 🙂:**
In supervised learning, envision having a jigsaw puzzle featuring a picture of a dog, where each puzzle piece is labeled with its correct position in the completed picture. The model learns from these labeled examples, figuring out the relationships between the shapes and colors of the pieces and their correct locations. This process, often referred to as the training step, allows the model to internalize the patterns within the labeled data. Subsequently, after training, the model is adept at taking a new puzzle of a dog and precisely assembling it based on the knowledge acquired during the training process.
Now, imagine you have a bag of puzzle pieces without a picture or labels — just a mix of colors and shapes. In unsupervised learning, the model explores the characteristics of the puzzle pieces without any predefined labels or information about the complete picture, identifying groups that share similar colors, shapes, or patterns. The model doesn't know what the complete picture looks like, but it discovers that certain pieces belong together based on shared features. These groups represent clusters of similar puzzle pieces.
In this puzzle analogy, supervised learning entails constructing a model with labeled examples to tackle a specific task, while unsupervised learning involves the model autonomously uncovering patterns or relationships within the data without explicit direction.
"Understanding Overfitting and Underfitting in a Quick 90-Second Read"
Overfitting and underfitting represent two common issues in machine learning that affect the performance of a model. In the context of overfitting, the model learns the training data too precisely, capturing noise and fluctuations that are specific to the training set but do not generalize well to new, unseen data. Underfitting, on the other hand, occurs when a model is unable to capture the underlying patterns in the training data, resulting in poor performance not only on the training set but also on new, unseen data. It indicates a failure to learn the complexities of the data.
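A tiny NumPy sketch of both failure modes (the toy data is invented for illustration): an overly simple model misses the pattern, while an overly flexible one memorizes the noise in the training set.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 8)
y_train = x_train + rng.normal(0, 0.05, 8)  # true pattern y = x, plus noise
x_test = np.linspace(0, 1, 100)
y_test = x_test                              # unseen data follows y = x

def errors(degree):
    """Fit a polynomial of the given degree; return (train MSE, test MSE)."""
    coefs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coefs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coefs, x_test) - y_test) ** 2)
    return train_mse, test_mse

under_train, under_test = errors(0)  # underfit: too simple, bad everywhere
good_train, good_test = errors(1)    # about the right complexity here
over_train, over_test = errors(7)    # overfit: interpolates the noise exactly
```

The degree-7 fit passes through every noisy training point, so its training error is essentially zero, yet it does not generalize as well as the simple linear fit.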
**Analogy:**
Intuitively, returning to the student example we presented when defining the concept of machine learning, we can consider a machine learning model as a student in a class. After the lecture phase, equivalent to the training step for the model, the student takes an exam or quiz to confirm their understanding of the course material. Now, imagine a student who failed to comprehend anything during the course and did not prepare. On exam day, this student, having failed to grasp the content, will struggle to answer and will receive a low grade; this represents the case of underfitting in machine learning. On the other hand, consider another student who, despite having a limited understanding of the course, mechanically memorized the content and exercises. During the exam, when faced with questions reformulated or presented in a new way, this student, having learned without true comprehension, will also fail due to an inability to adapt, illustrating the case of overfitting in machine learning.
This analogy between a machine learning model and a student highlights the insightful parallels of underfitting and overfitting. Just as a student can fail by not grasping the course or by memorizing without true understanding, a model can suffer from underfitting if it's too simple to capture patterns or overfitting if it memorizes the training data too precisely. Striking the right balance between complexity and generalization is crucial for developing effective machine learning models adaptable to diverse and unknown data. In essence, this educational analogy emphasizes the delicate equilibrium required in the machine learning process.
Grasping the concept of machine learning in just 90 seconds of reading
Machine learning is a branch of the artificial intelligence domain that encompasses various methods relying on learning from data to solve problems such as prediction, classification, dimensionality reduction, etc. Learning from the data means that machine learning systems can analyze patterns, extract insights, and make informed decisions without being explicitly programmed for a particular task. Instead of adhering to predetermined rules, machine learning methods adapt and improve their performance over time. The process involves training models, validating their accuracy, and testing their generalization to new, unseen data.
Intuitively, we can envision the machine learning model as a student in a classroom. The teacher imparts knowledge to the student during what we refer to as the training step for the machine learning model. After the session, the student takes a quiz to consolidate the concepts, representing the validation step for the machine learning model. Finally, the student takes a comprehensive final exam to test their understanding of the entire course. The training and validation stages repeat gradually over what are termed epochs in the context of a machine learning model.
In this analogy, each epoch corresponds to a complete pass through the training data, followed by a validation check. It's like the student attending multiple class sessions and quizzes to reinforce and assess their knowledge, with the comprehensive final exam reserved for the end. With each successive epoch, the machine learning model refines its understanding of the data, enhancing its ability to make accurate predictions or classifications in real-world applications. Just as a student becomes more adept through repeated study sessions, the machine learning model becomes increasingly proficient with each pass through the data.
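The analogy above can be sketched as a toy training loop (all numbers invented for illustration): each epoch is one pass over the training split followed by a validation check, and the held-out test split is evaluated only once at the end.

```python
import numpy as np

# Toy problem: learn the weight w in y = 3x by gradient descent.
rng = np.random.default_rng(0)
x = rng.random(100)
y = 3 * x
x_train, y_train = x[:70], y[:70]    # the lectures
x_val, y_val = x[70:85], y[70:85]    # the quizzes
x_test, y_test = x[85:], y[85:]      # the final exam

w, lr = 0.0, 0.1
for epoch in range(200):
    # Training step: one pass over the training data.
    grad = -2 * np.mean((y_train - w * x_train) * x_train)
    w -= lr * grad
    # Validation step: check progress on data the model did not train on.
    val_loss = np.mean((y_val - w * x_val) ** 2)

# Test step: run once, on data never seen during training or validation.
test_loss = np.mean((y_test - w * x_test) ** 2)
```

After enough epochs the learned weight converges toward the true value of 3, and the test loss confirms the model generalizes beyond the examples it trained on.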
The nomad developer setup #1: A guide for beginners
Fun fact: I first wrote this article on another platform while working on Bluwr on a train. No matter the distance, it is always nice to be able to work from anywhere you want. All you need for this setup to work is access to a web browser.
In this article I will share part of the setup that I am using. It is the first in a series covering the whole setup.
This first article is about how to set up VS Code to work from any device with a web browser. Visual Studio Code is a text editor by Microsoft that can be customized with an almost infinite number of plugins. We will use VS Code in client/server mode: the VS Code server runs on a virtual machine hosted by a cloud provider, and the client can be any web browser. We will use the browser to connect to the VS Code server. The interface inside the web browser is identical to the standard VS Code interface, and you will be able to edit any file on the virtual machine.
So first you need a host. Any cloud provider will do; the only things you need are an IP address and a user that can SSH to the host. A side note here: I almost exclusively use SSH keys, never user/password, to connect to cloud hosts, as it is far more secure.
Once the SSH session has started, install Docker if it is not already available on the host, then execute the following command:
docker run -d \
-p 8443:8443 \
-e PASSWORD="1234" \
lscr.io/linuxserver/code-server
We could basically end this article right now. However, there are a few more things I want to talk about. These points took me a bit of time to figure out, and I thought I'd share them with you:
1. How to make sure you don’t have to re-install all your plugins every time you start a new code server instance
2. How to make sure your settings are stored, so you don't have to manually re-enter them every time you restart your Docker container
3. How to set a custom working directory where all your code will be stored
These are all achieved using the same principle: bind mounting a folder on your host to a dedicated folder in the Docker container.
If you look at the container folder structure, you can see that all plugins are installed in the /config/extensions folder. VS Code configuration in the container is stored in /config/data/User/settings.json. If you have been using VS Code for some time and would like to keep the same configuration, you can take your existing settings file and put it somewhere on your virtual machine. Finally, to get a defined workspace, bind mount the folder where you usually put your code to the folder dedicated to it in the container.
The full command is:
docker run -d \
-p 8443:8443 \
-e PASSWORD="1234" \
-v "/home/user/vscode_extensions:/config/extensions" \
-v "/home/user/vscode_settings:/config/data/User/" \
-v "/home/user/workspace/:/config/workspace" \
lscr.io/linuxserver/code-server
To save money, I only start and pay for cloud resources when I need them. Of course, I don't repeat all these steps and re-install all the tools I need each time I start a new virtual machine. I use a Packer/Ansible/Terraform combination to create a snapshot that I can use as a base image each time I create a new host. That will be the subject of my next article.
Now, working from anywhere as a digital nomad is really nice and convenient, but does not mean you should work all the time. I made this setup originally only to be geographically free, I still make it a point to have a healthy work/life balance. I have many hobbies and would not trade them for more hours of coding.
Automation existed long before the advent of AI.
Automation, the process of leveraging technology to perform tasks without human intervention, has a rich history that long precedes the rise of artificial intelligence.
In the early 1800s, the textile industry witnessed the introduction of automated looms that could weave fabric without constant manual operation. Before the Jacquard loom, weaving complex designs required workers to operate looms manually for long hours. The Jacquard loom laid the foundation for the development of modern computing concepts like binary systems and programming, as its punch cards served as an early form of programming instructions.
The mid-20th century brought forth the development of programmable computers. These machines facilitated automation by executing predefined instructions, enabling the automation of complex calculations, data processing, and control systems in various industries.
While AI has undeniably transformed automation, introducing powerful capabilities such as machine learning and cognitive reasoning, it is crucial to recognize that thoughtful application remains key. When used judiciously, AI significantly enhances automation and innovation, ultimately leading to a promising future.
How Bluwr is optimized for SEO, Speed and Worldwide Accessibility.
TL;DR: Bluwr is Fast & Writing on Bluwr will help you get traffic.
These choices allow us to have a lightning-fast website and have great benefits for our writers. Because most of Bluwr appears as static HTML, articles appear first, readers never have to wait for them to load, and search engines have no difficulty indexing what's on Bluwr.com. This makes everything you write on Bluwr easier to find on the internet. It also means that Bluwr.com loads fast even on the worst of connections. This is noteworthy, as even a slight delay in loading can significantly reduce the chances of your article being read.
Our goal is to make Bluwr accessible to anybody on the internet, even on a limited 3G connection.
Welcome to Bluwr.
We are glad to see you here, we promised that Bluwr would be released on the 13th of November 2023 and we delivered. Bluwr is unique, we took inspiration from times far before the internet. Bluwr is a bridge between the past and the future, a conduit for thoughtfulness and inspiration.
We built it with maturity and foresight, striving for beauty and perfection.
A text-based platform for times to come, the past and the future seamlessly merging into something greater.
"" - Bluwr.