While recently 'tidying my man-cave' I came across an analogue computer I made decades ago, when integrated op-amps first became available. It reminded me of what a great tool they were for getting to grips with the ideas of differentiation and integration, and the power of feedback.
Many of my LB programming tasks have been based on physical modelling using these ideas, so I thought a quick LB update of one of the earliest programs on my site was in order.
Digital computers solve the formulae of physics, engineering, sociology, economics, etc., which are generally expressions of how something changes over time or distance, by doing large numbers of calculations, moving forward a tiny step each time. The numbers stored can be regarded as exact for practical purposes, but the process is slowed by the repetitions, even on high-speed machinery.
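As a minimal illustration of that stepping process (my own Python sketch, not part of the LB program), here is the simplest possible integrator, repeatedly adding a tiny 'rate of change times dt' to creep towards the exact answer:

```python
# Solve dx/dt = -x by stepping forward in tiny increments, the way a
# digital machine approximates a continuous process.
import math

dt = 0.0001          # tiny time step
x, t = 1.0, 0.0      # initial condition x(0) = 1
while t < 1.0:
    x += -x * dt     # Euler step: x(t+dt) ~ x(t) + (dx/dt)*dt
    t += dt

print(x)             # close to the exact answer, e^-1 = 0.3678...
```

Ten thousand repetitions for one second of behaviour: exactly the trade the text describes, near-exact numbers bought with lots of small steps.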
Analogue machines use circuit voltages to represent the quantity whose changing behaviour is to be studied. Since circuits exist which can add, invert, multiply and integrate these voltages over time, they are essentially simple to program and fast in execution, BUT you pay for this with much lower accuracy, typically a few percent. You also have to worry about setting and resetting initial conditions, and about checking the scaling of all stages so that signals neither exceed practical op-amp output voltages nor become so small they are swamped by noise.
My machine in the photo above had only four units, which could operate as summers/inverters/integrators, and a single voltage display: the large meter. The patch board allowed easy changing of the necessary connections, resistors or capacitors. An external CRO or loudspeaker allowed observation of higher speed solutions. My inspiration was an early book called 'Analogue Computing at Ultra-high Speed', and I played with things like the wave equation solution for the electron in a hydrogen atom-
or the forced oscillation of a mass-on-a-spring.
As an intermediate approach, an analogue-styled graphic frontend can be created which hides the digital processing behind the scenes.
I am modelling a spring supporting a mass. It can have a fixed force applied, and/or impulses or other driving functions. Damping is done by a dash-pot, a piston working against air friction and removing energy.
Its original position is a static equilibrium. If a force is applied, it will move towards a new position, but overshoot. Depending on the damping, if any, it will oscillate continuously (no damping) or die away in a longer or shorter time. There is a critical amount of damping which makes it settle back, stationary, in minimal time. An extreme amount of damping may make it take centuries to settle to a new position, or milliseconds... You can play with mass, spring constant, damping constant etc, and display as x/t, v/t, a/t or x/v graphs.
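The damping regimes can be sketched digitally. This Python illustration is my own (the names m, ks and c are mine, not taken from the LB program); it compares an undamped run, which oscillates forever, with a critically damped one, which settles back without swinging past equilibrium:

```python
import math

def settle(m, ks, c, x0=1.0, dt=0.001, t_max=20.0):
    """Step the damped spring m*a = -ks*x - c*v forward; return x samples."""
    x, v, t = x0, 0.0, 0.0
    xs = []
    while t < t_max:
        a = (-ks * x - c * v) / m    # Newton's Second Law
        v += a * dt                  # integrate a to get v
        x += v * dt                  # integrate v to get x
        xs.append(x)
        t += dt
    return xs

m, ks = 1.0, 4.0
c_crit = 2.0 * math.sqrt(m * ks)     # critical damping constant

undamped = settle(m, ks, 0.0)        # oscillates continuously
critical = settle(m, ks, c_crit)     # settles in minimal time

print(min(undamped))                 # swings right down past -0.9
print(critical[-1])                  # essentially back at zero
```

The critical value 2*sqrt(m*ks) is the standard result; above it the return gets ever slower, below it the mass overshoots and rings.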
It is all explained by Newton's Second Law. In a displaced position, there is a force from the spring trying to return it to the static equilibrium position. If it is moving, there is a resistance force from air friction, proportional to the velocity and tending to reduce it. And outside disturbances can be added.
If we assume we know the acceleration, we can integrate 'a *dt' to get v, and 'v *dt' to get x. So we can take these values, multiply them by constants, add them, and get
-omegaSquared*x - k*v + F(t)
The really clever thing is that by looping these outputs back to the input we can force the circuit to solve
a = -omegaSquared*x - k*v + F(t)
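Done digitally, that feedback loop becomes a few lines. This Python sketch is mine, not the original LB code: each step it re-forms a from the current x and v (the loop-back), then passes a through two numerical integrators to update v and x:

```python
# omegaSquared and k as in the equation above; values chosen for illustration.
omega2 = 9.0          # omegaSquared = spring constant / mass
k = 0.5               # damping constant
F = lambda t: 0.0     # no external driving force in this run
dt = 0.001

x, v, t = 1.0, 0.0, 0.0              # start displaced, at rest
while t < 10.0:
    a = -omega2 * x - k * v + F(t)   # the looped-back equation
    v += a * dt                      # first integrator:  a -> v
    x += v * dt                      # second integrator: v -> x
    t += dt

print(x)   # a damped oscillation: well inside the starting amplitude
```

The loop body is a direct transcription of the patch-board wiring: summer, two integrators, and the feedback paths.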
The example screen below shows me alternating the presence and absence of a driving force at a frequency near the natural frequency, thus ramping the amplitude up. When I stop, it decays away again. You see both x/t and x/v graphs.
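That resonance behaviour can be sketched the same way (again my own Python, with driver frequency and damping values chosen purely for illustration): drive at the natural frequency for a while, then switch the driver off and watch the amplitude decay:

```python
import math

omega2, k, dt = 4.0, 0.1, 0.001
omega_n = math.sqrt(omega2)          # natural (angular) frequency

x, v, t = 0.0, 0.0, 0.0
peak_driven = 0.0
while t < 60.0:
    F = math.sin(omega_n * t) if t < 30.0 else 0.0   # driver on, then off
    a = -omega2 * x - k * v + F
    v += a * dt                      # a -> v
    x += v * dt                      # v -> x
    if t < 30.0:
        peak_driven = max(peak_driven, abs(x))       # amplitude ramping up
    t += dt

print(peak_driven)   # large driven amplitude at resonance
print(abs(x))        # well down again after the driver stops
```

With the driver matched to the natural frequency, each push arrives in phase and the amplitude ramps up; remove it and the damping term alone takes over, so the motion dies away.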