Think Forward.




In the age of AI Engineering: the frantic craze to replace Software Engineers

For the past four years, I have been engineering software for machine learning models. I have seen models for pest and disease identification, chest condition localization and detection, food classification and identification, and now, predominantly, chatbots for almost anything. Somehow, the goal now is to automate the work of software engineers by developing models that can build end-to-end software. Is this goal profound? I think it is, and I say, "bring it on, let's go crazy with it."

There has been uncertainty and fear about the future prospects of Artificial Intelligence, especially the replacement of software developers. Despite this uncertainty and fear, a future where it is possible to build applications just by saying the word seems intriguing. In that future, no application would be solely owned by "big tech" companies anymore, because everyone could literally build one. The flexibility and ease of application development would push popular social media companies like Snapchat and Instagram to make their APIs public (if not already public), portable and free in order to maintain their user base. That would mean greater privacy and freedom for users, which makes it a desirable future.

As a rule of thumb, automation of any kind is good. It improves processes and speeds up productivity and delivery. However, one could argue that wherever there is a speed-up, there is a surplus of time and human resources. In the history of humanity, we automated food production through mechanized farming and created enough surplus time and manpower to build abstractions around our lives in the forms of finance, industry, and so on. So, in the race to automate engineering, what do we intend to use the surplus time and manpower for? This question is only a different coining of a very important one: what are the engineers whose jobs would be automated going to do?
And the answer is that when we think of the situation as a surplus of manpower, we can view it as an opportunity to create something new rather than as an unemployment problem. For example: as a software engineer, if Devin (the new AI software development tool touted as being able to build end-to-end software applications) were successfully launched and offered at a fee, I would gladly pay for it and let it do all my tasks while I supervise. I would then spend the rest of my time on other activities that please me. What those activities would be is the question left unanswered. Would they be profitable, or would they be recreational? Regardless, the benefits we stand to gain from automating software engineering are immense, and it makes absolute sense to do it.

On the other hand, we also stand to lose one enormous thing as a species: our knowledge and brilliance. Drawing again from history, today almost any layperson can engineer software easily. This was not possible in the early days of Dennis Ritchie, Ken Thompson, Linus Torvalds and their peers. As engineering becomes easier to do, we lose the hard-core knowledge and understanding of the fundamentals of systems. For example, there is high demand today for COBOL engineers because many financial trading applications built in the '90s need to be updated or ported to more modern languages. The only problem is that hardly anyone knows how to write COBOL anymore. It is not that the COBOL language is too old; in my opinion, it is rather that the engineers who could have learnt COBOL went for what was easier and simpler, leaving a debt of COBOL knowledge. So, one big question remains: in scenarios of failure, would there be any engineers knowledgeable enough to recover, resurrect or revive the systems supporting automated AI, just as in the case of COBOL?
When we make things easier for everybody, we somehow make everybody a bit dumber.

AI Assisted Engineering

Having discussed the benefits of autonomous software engineering tools, and having argued that full automation could cause a decline in basic software engineering knowledge, what is the best way to apply machine-learning automation to software engineering? Assistive engineering. This conclusion is based on observations of pull requests from engineers who use Copilot and those who do not. Consider some examples. `console.log` is a debugging tool many JavaScript engineers use: it prints variable values wherever it is placed during code execution. Some engineers fail to remove their `console.log` calls before committing. Pull requests from engineers who use GitHub's Copilot usually contain no leftover `console.log` entries, while those from engineers who do not use Copilot often do. The assistive tool evidently prompts its users to remove unnecessary `console.log` calls before they commit their code. Another example is the level of convolution in code written with AI assistants. With Copilot specifically, engineers gradually came to write more complicated code. This was expected, given the depth of knowledge the AI tool possesses; sometimes, though, that complication seemed unnecessary for the task at hand.

Across the applications of ML to industry, it is observed that fully autonomous agents are not possible yet, and might ultimately never be. If humans are to trust and use any system as an autonomous agent without human intervention or supervision, that is unlikely to be achievable with ML, because of the probabilistic nature of these systems and their fundamental inhumanity. The only ML systems humans would accept as autonomous agents are superintelligent systems.
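To make the leftover-`console.log` problem concrete, here is a minimal sketch of the kind of check an assistive tool or a pre-commit hook could run over source text. The function name and the sample snippet are hypothetical, not taken from any real tool:

```javascript
// Hypothetical pre-commit check: report the line numbers of any
// console.log calls left in a source string.
function findConsoleLogs(source) {
  const hits = [];
  source.split("\n").forEach((line, i) => {
    if (/\bconsole\.log\s*\(/.test(line)) {
      hits.push(i + 1); // 1-indexed line numbers
    }
  });
  return hits;
}

// Example: a snippet with one forgotten debug statement on line 3.
const snippet = [
  "function total(items) {",
  "  const sum = items.reduce((a, b) => a + b, 0);",
  "  console.log('debug sum:', sum);",
  "  return sum;",
  "}",
].join("\n");

console.log(findConsoleLogs(snippet)); // reports line 3
```

An assistive tool goes further than a regex, of course, but the point stands: the value is in flagging the slip before the commit, not in writing the code for you.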
Some call such superintelligent systems artificial general intelligence, or super AI. Such systems would know and reason beyond what humans could even comprehend; there is no finite definition of how much more intelligent than humans they would be. From this, an argument can be made that if the degree of intelligence of such superintelligent systems is not comprehensible by humans, then such systems will never exist. In other words, we can only build what we can define; that which we cannot define, we cannot build. In the grand scheme of things, every workforce whose work can be automated by AI is eventually going to be "somewhat" replaced by Artificial Intelligence. But the humans in the loop cannot be "totally" replaced. In essence, in a company of five software engineers, perhaps only two might be replaced by AI. This is because, in the end, humans know how to use tools, and whatever we build with AI remains a tool that cannot be fully trusted as a domain expert. We will always require a human to use these tools responsibly and with appropriate trust.