Generative AI, LLMs, and AI-assisted coding remind me of something I heard when I was learning Japanese almost 20 years ago – the “word processor syndrome”, or “word processor language disorder”.
The premise was that the rise of word processors and other electronic input devices would lead to people becoming less proficient at writing Kanji (the logographic script used in Japanese) by hand, even though they would still be able to read and recognise Kanji well. They would simply rely on the device (a word processor, computer, mobile phone, etc.) to convert typed phonetic input into Kanji, and would only need to be skilled enough to pick the correct character from a set of options. The anxiety was that this would erode traditional literacy and handwriting skills.
I see parallels to AI-based code generation here – many seem to worry that relying on AI coding tools will erode our “traditional literacy and skills” for writing code, while others are flag-bearers of the new technology and call out the Luddites for being stuck in the old ways. I think we will probably still want to keep learning how to program and design systems, but we will start to suffer from an “AI coding syndrome”: we will be able to prompt for code to be generated, read it, and verify that it seems correct, yet over time we may lose the ability to write it easily and effectively ourselves. I already see this for some of the simpler things, e.g., create a rake task that does this and defaults to using these values, write code that exports the data to a database, create a parser that can handle this kind of text, and so on. It will increasingly be more productive to produce such code with a tool, with less thinking required.
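To make the first of those examples concrete, here is a sketch of the kind of boilerplate such a prompt tends to produce – everything in it (the task name, arguments, and default values) is hypothetical, purely to illustrate the shape of the code:

```ruby
require "rake"

# Make the Rake DSL (namespace, desc, task) available at the top level.
include Rake::DSL

# Hypothetical task: exports sample rows as CSV, with overridable defaults.
namespace :export do
  desc "Export sample rows as CSV (output path and row count have defaults)"
  task :users, [:path, :limit] do |_t, args|
    path  = args[:path]  || "users.csv"  # default output file
    limit = (args[:limit] || 2).to_i     # default number of rows
    lines = ["id,name"] + (1..limit).map { |i| "#{i},user#{i}" }
    File.write(path, lines.join("\n") + "\n")
  end
end

# From a Rakefile this would be run as `rake "export:users[out.csv,3]"`;
# here we invoke it programmatically so the sketch is self-contained.
Rake::Task["export:users"].invoke(nil, 3)
```

Nothing here is hard to write by hand, which is exactly the point: it is routine enough that generating and reviewing it is faster than typing it out.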
I do, however, share the concern that a lot depends on what the models are trained on, and that we may run out of quality code to train them on for newer things. We will have to wait and see how this evolves, but I expect that programming will continue to exist and that people will still need to learn it (and they should) to do a good job – it’s just that the bar may be raised, and people will be expected to achieve more in a working day.
For now, I feel that if I sign off on some code or documentation, it is my responsibility to assure people that it is of a certain quality – so the ability to review code and ensure it is fit for purpose still rests with me. Over time, the LLMs “writing” code will likely be able to provide that assurance themselves, and we will be able to trust the final output more – much as almost no one today checks whether a C++ compiler produced the correct assembly.
What do you think? If you have some comments, I’d love to hear from you. Feel free to connect or share the post (you can tag me as @onghu on X or on Mastodon as @onghu@ruby.social or @onghu.com on Bluesky to discuss more).