Software has a problem.
OK, it has many problems. I’ve already highlighted one of them. But this is another important one.
The problem is that software—all software, with no exceptions—sucks. The reason for this is multifaceted and we could spend years and years arguing about who has the longer list of reasons, but in the end it boils down to the proverbial shoemaker’s children going barefoot: our development tools are the worst of the worst in software.
Computers are made up entirely of CMOS transistors, which are all basically identical. It’s just in their organization and structure that they become interesting. Similarly, it’s alright for code to be made up of simple characters; it can still have structure.
And another missed point.
Digital computers are made up of switches. (Not necessarily transistors, and the ones that are transistors are not necessarily CMOS.) But nobody programs them as a pile of switches, because that’s an insanely stupid level of abstraction to view problem domains through.
In the very, very, very early days of computers perhaps this was done, but that rapidly grew unwieldy, and we developed ever better ways of organizing things so we could focus on problem domains instead of the minutiae of which switch was pointed which way. Sure enough, underneath all that there are still myriads of switches flipping on and off, but not a single programmer knows which switch is pointing which way beyond the grossest of levels (like GPIO pins).
Our problem domains have ballooned in complexity since the early days of programming languages, but our programming languages have not kept pace. We spend more time on the minutiae of testing, branching, and looping than on the problem domains in our code, and complicated relationships in particular are difficult to model without extensive, error-prone effort: single-dimensional lines of code folded up into bizarre structures of tests and branches.
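To make that concrete, here is a minimal sketch in Python of what I mean. The rule, the names, and the limits are all hypothetical, invented purely for illustration: one simple real-world relationship ("who may approve this expense?") ends up folded into a nest of tests and branches.

```python
from dataclasses import dataclass

# Hypothetical domain types; nothing here comes from a real system.
@dataclass
class Employee:
    id: int
    manager_id: int
    department: str
    seniority: int

def may_approve(approver: Employee, submitter: Employee, amount: float) -> bool:
    # One relationship, expressed the way most languages force us to express it:
    # as a cascade of tests and branches.
    if approver.id == submitter.manager_id:
        if approver.department == submitter.department:
            if approver.seniority >= 3:
                return amount <= 10_000
            return amount <= 1_000
        # Cross-department approvals need a higher bar (again, invented numbers).
        return approver.seniority >= 5 and amount <= 500
    return False
```

The relationship itself is simple to state in a sentence, but the code version scatters it across branch conditions, and every new clause in the rule multiplies the paths we have to test.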
(This is, incidentally, something that was an active area of research in the 1970s. Computing power and computing complexity have risen exponentially since then.)
As programmers we are doing the equivalent of using a screwdriver as a chisel while building a house. (And, worse, using that screwdriver as the only tool for the whole project.) It’s no wonder that our work is slipshod and filled with errors.
Abstractions are important, but we don’t need to reinvent computing to support them… modern programming languages allow basically limitless abstraction. Have you ever used a high-level library like PyTorch? You can define a very powerful neural network in less than 100 lines of code. You get to focus entirely on the architecture of the network (the number and types of layers), and it handles everything beneath the surface: calculating gradients, iteratively updating weights, and so on. Even if you choose to define your own custom layers, PyTorch can automatically calculate the gradients of arbitrarily complex functions you compose yourself. Python (and many other languages) is extremely flexible.
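For concreteness, here is a minimal sketch of the kind of thing this comment describes, assuming PyTorch is installed; the architecture, dimensions, and random data below are invented for illustration, not taken from the original.

```python
import torch
import torch.nn as nn

# Toy regression network: the programmer states the architecture declaratively.
model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 1),
)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

# Dummy data stands in for a real dataset.
x = torch.randn(64, 16)
y = torch.randn(64, 1)

for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()   # gradients of the composed function, computed automatically
    optimizer.step()  # iterative weight update handled by the optimizer
```

Every tensor operation in the composed model participates in autograd, which is what lets `loss.backward()` produce gradients without the programmer ever writing them by hand.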
Human language is “1D” by your definition, but you can still use it to talk about very high-level ideas.