Random Attractors – Found using Lyapunov Exponents (2001)
paulbourke.net | 129 points by cs702 a day ago
There's a book covering this and more from 1993 called "Strange Attractors: Creating Patterns in Chaos" by Julian C. Sprott that's freely available here: https://sprott.physics.wisc.edu/SA.HTM
It's fun (errr... for me at least) to translate the ancient BASIC code into a modern implementation and play around.
The article mentions that it's interesting how the 2D functions can look 3D. That's definitely true. But there's also no reason you can't add on however many dimensions you want and get genuinely many-dimensional structures to noodle around with in visualizations and animations.
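For anyone tempted to do that translation: here's a minimal modern sketch (my own, not Sprott's original BASIC) of the 2-D quadratic maps from the book, with the usual divergence check. Note that a bounded orbit may still just be a fixed point or cycle; picking out chaos needs the Lyapunov test on top.

```python
# Sketch of searching random 2-D quadratic maps of the kind used in
# Sprott's "Strange Attractors":
#   x' = a0 + a1*x + a2*x^2 + a3*x*y + a4*y  + a5*y^2
#   y' = a6 + a7*x + a8*x^2 + a9*x*y + a10*y + a11*y^2
import random

def step(x, y, a):
    xn = a[0] + a[1]*x + a[2]*x*x + a[3]*x*y + a[4]*y + a[5]*y*y
    yn = a[6] + a[7]*x + a[8]*x*x + a[9]*x*y + a[10]*y + a[11]*y*y
    return xn, yn

def orbit(a, n=10000, discard=100):
    """Return the orbit after transients, or None if it diverges."""
    x, y = 0.05, 0.05
    pts = []
    for i in range(n):
        x, y = step(x, y, a)
        if abs(x) > 1e6 or abs(y) > 1e6:   # diverged: discard candidate
            return None
        if i >= discard:
            pts.append((x, y))
    return pts

# Try random coefficient sets until one gives a bounded orbit.
# (Bounded includes fixed points; Sprott keeps only the chaotic ones.)
random.seed(1)
found = None
for _ in range(200):
    a = [random.uniform(-1.2, 1.2) for _ in range(12)]
    found = orbit(a)
    if found is not None:
        break
print(found is not None)
```

The coefficient range and iteration counts are illustrative choices, not the book's exact parameters.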
As an undergraduate I worked with some other physics students to construct an analog circuit using op amps that modeled one of Sprott's equations, and we confirmed experimentally that the system exhibited chaotic behavior. We also used a transconductance amplifier as a control parameter and swept through the different states (chaotic, period windows) of the circuit. We did not get as far as comparing the experimental and predicted period windows while I was there, but it was an interesting project for us. At one point I turned up an article in Physica D describing how to calculate the first Lyapunov exponent from small data sets, which we used to determine whether we were in a period window or not.
This is how I envision LLMs working, to some extent: the "logic paths" follow something like this, where the Markov-chain-esque probabilities jump around the vector space. It reminds me that to get the answer I want, I need to set up the prompt to get near the right "attractor logic" pathway. Once in a close enough ballpark, they'll bounce to the right path.
As a counter, I found that if you add an incorrect statement or fact that lies completely outside the realm of the logic-attractor for a given topic, the output is severely degraded. Well, more like a statement or fact that's "orthogonal" to the logic-attractor for a topic. Very much as if it's struggling to stay on the logic-attractor path but the outlier fact causes it to stray.
Sometimes less is more.
Interesting. Nothing prohibits us from thinking of pretrained LLMs as dynamical systems that take a token state and compute an updated token state: x_{n+1} = LLM(x_n), starting from an initial token state x_0. Surely we can compute trajectories (without sampling, for determinism) and study whether LLMs exhibit chaotic behavior. I don't think I've seen research along those lines before. Has anyone here?
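One small concrete consequence of that framing: with deterministic decoding and a bounded context, x_{n+1} = LLM(x_n) is an iterated map on a finite state space, so every trajectory must eventually become periodic, and you can find the cycle without storing the whole trajectory using Floyd's tortoise-and-hare. A toy sketch (the `f` below is a hypothetical stand-in map, not a real model):

```python
# Floyd's cycle detection on an iterated map x_{n+1} = f(x_n).
# Returns (mu, lam): the index where the cycle starts and its length.

def find_cycle(f, x0):
    # Phase 1: tortoise moves 1 step, hare 2, until they meet.
    tortoise, hare = f(x0), f(f(x0))
    while tortoise != hare:
        tortoise, hare = f(tortoise), f(f(hare))
    # Phase 2: restart tortoise from x0 to find the cycle's start index mu.
    mu = 0
    tortoise = x0
    while tortoise != hare:
        tortoise, hare = f(tortoise), f(hare)
        mu += 1
    # Phase 3: walk around the cycle once to measure its length lam.
    lam = 1
    hare = f(tortoise)
    while tortoise != hare:
        hare = f(hare)
        lam += 1
    return mu, lam

f = lambda x: (x * x + 1) % 255   # toy stand-in for a deterministic "LLM step"
print(find_cycle(f, 3))           # → (2, 6)
```

For a real model the state space is astronomically large, so in practice you'd measure divergence of nearby prompts rather than wait for the cycle, but the finite-state structure is what makes the dynamical-systems framing well posed.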
Looks like @cs702 [1] posted a related link where a NN follows an attractor pattern!
I've only skimmed it but it very much looks like what I've been imagining. It'd be cool to see more research into this area.
1: https://news.ycombinator.com/item?id=45427778 2: https://towardsdatascience.com/attractors-in-neural-network-...
Is anyone doing anything besides visualizations with this chaos stuff? I liked the article linked below depicting the state space of artificial neurons: https://towardsdatascience.com/attractors-in-neural-network-...
Please take a look at the most recent draft of my book "Hidden Markov Models and Dynamical Systems" https://www.fraserphysics.com/book.pdf In the first chapter I talk about a chaotic model for laser dynamics, and in the last chapter I use the same ideas to analyze ECGs.
The code and text are at https://gitlab.com/fraserphysics/hmmds. From a Nix command line, "make book" builds the book in about 10 hours.
I'd be grateful for any feedback on the book or the software.
Well, engineers building physical systems like airplanes and rockets use Lyapunov exponents to avoid chaotic behavior. No one sane wants airplanes or rockets that exhibit chaotic aerodynamics!
Has progress stalled in this area? I don't know, but surely there are people working on it. In fact I recently saw an interesting post on HN about a new technique that among other things enables faster estimation of Lyapunov exponents: https://news.ycombinator.com/item?id=45374706 (search for "Lyapunov" on the github page).
Just because we haven't seen much progress, doesn't mean we won't see more. Progress never happens on a predictable schedule.
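For intuition on what those estimators compute: for a 1-D map, the largest Lyapunov exponent is just the long-run average of ln|f'(x_n)| along an orbit. A minimal sketch using the logistic map (a standard textbook example, not taken from the linked post; at r = 4 the exponent is exactly ln 2):

```python
# Derivative-based Lyapunov exponent estimate for the logistic map
# f(x) = r*x*(1-x), with f'(x) = r*(1-2x).
import math

def lyapunov_logistic(r, x0=0.1, n=100000, discard=1000):
    x = x0
    for _ in range(discard):              # let transients die out
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        x = r * x * (1 - x)
        total += math.log(abs(r * (1 - 2 * x)))   # ln|f'(x)|
    return total / n

print(lyapunov_logistic(4.0))   # ≈ 0.693 (ln 2): positive, chaotic
print(lyapunov_logistic(2.5))   # negative: stable fixed point, no chaos
```

The sign of the result is the classification engineers care about: positive means nearby trajectories separate exponentially.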
To add to this, a moderate amount of turbulence (a type of chaotic fluid flow) is sometimes deliberately engineered into engines and wing surfaces to improve combustion efficiency and lift, and chaotic flow can also induce better mixing in heat exchangers and microfluidic systems.
Absolutely!
These techniques are the key unlocks to robustifying AI and creating certifiable trust in their behavior.
Starting with pre-deep-neural-network-era techniques like LQR-RRT trees, through to today's hot topics of contraction theory and control barrier certificates in autonomous vehicles.
Chaos is an important part of Control Systems theory from what I understand.
Chaos theory > Applications: https://en.wikipedia.org/wiki/Chaos_theory#Applications
People use chaos theory to make predictions about attractor systems that have lower error than other models.
Not really. Fractals and chaos theory were a bit like blockchain in that it was a "new kind of science" and it was supposed to explain everything, and you could buy pop-science books talking about the implications.
And then it sort of fizzled out, because while it's interesting and gives us a bit of additional philosophical insights into certain problems, it doesn't do anything especially useful. You can use it to draw cool space-filling shapes.
No. Yes to the overhype, no to the "not really".
I don't see how better understanding of non-linear systems and global dynamics can be considered not useful. For starters, better control of nonlinear systems, keeping them from turning chaotic, is incredibly useful. So many hard problems can be approximately reduced to "keep this non-linear system stable." Staying in the "edge of chaos" regime has proven to be an optimal choice for a plethora of problems.
I think that's a bit of scope creep. The study of dynamical systems is obviously important and is sometimes rolled into chaos theory, but it predates it - and tellingly, it almost never concerns itself with chaotic behavior, because you can't do a whole lot with that.
So it's sort of like saying that the physics of black holes are very useful to us day-to-day because we want to make sure we don't fall into any black holes.
I'm not saying that chaos theory isn't interesting. It's just that it's pretty hard to find any concrete application of it, beyond hand-wavy stuff like "oh, it somehow helped us understand weather".
You can do a lot with chaos. One of the things it lets you do is find an unforced trajectory from the vicinity of any state to the vicinity of any other (accessible) state. Sensitivity to initial conditions means sensitivity to perturbations, which also means sensitivity to small control inputs, and this can be leveraged to your advantage.
Multibody orbits are one such chaotic system, which means you can take advantage of that chaos to redirect your space probe from one orbit to another using virtually zero fuel, as NASA did with its ISEE-3 spacecraft.
I don’t think you’re remotely correct, but I also don’t know how to dispute your ignorance in any useful way.
To @esafak I suggest following @westurner’s post.
I like the concept of Stable Manifolds. Classifying types of them is interesting. Group symmetries on the phase space are interesting. Explaining this and more is not work I’m prepared to do here. Use Wikipedia, ask ChatGPT, enrol in a course on Chaos and Fractal Dynamics, etc.
I am quite familiar with this space and I will reassert that its by far most significant application is making pretty pictures.
The Wikipedia list you're indirectly referencing is basically a fantasy wishlist of the areas where we expected the chaos theory to revolutionize things, with little to show for it. "Chaos theory cryptography", come on.
https://paulbourke.net/fractals/lyapunov/
> It may diverge to infinity, for the range (+- 2) used here for each parameter this is the most likely event. These are also easy to detect and discard, indeed they need to be in order to avoid numerical errors.
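The divergence check in that quote is one branch of the classification; the article's titular method, keeping only maps with a positive Lyapunov exponent, can be sketched with the standard two-trajectory estimate (iterate a point and a nearby shadow point, measure separation growth, renormalize each step). A hedged illustration using the Hénon map as a known chaotic example, not Bourke's actual code:

```python
# Two-trajectory Lyapunov estimate for classifying a 2-D map.
import math

def henon(x, y):
    # Classic Hénon map, a known chaotic example.
    return 1 - 1.4 * x * x + y, 0.3 * x

def classify(step, x0=0.1, y0=0.1, d0=1e-8, n=20000):
    x, y = x0, y0
    xs, ys = x0 + d0, y0          # shadow point, distance d0 away
    total = 0.0
    for _ in range(n):
        x, y = step(x, y)
        xs, ys = step(xs, ys)
        if abs(x) > 1e6 or abs(y) > 1e6:
            return "diverges"     # discard, as the article describes
        d = math.hypot(xs - x, ys - y)
        if d == 0:
            return "converges"
        total += math.log(d / d0)
        # renormalize the shadow back to distance d0 along the separation
        xs = x + (xs - x) * d0 / d
        ys = y + (ys - y) * d0 / d
    lam = total / n               # average exponential growth rate
    return "chaotic" if lam > 0 else "periodic/fixed"

print(classify(henon))   # → chaotic (Hénon's largest exponent ≈ 0.42)
```

The random-attractor search is then: draw random coefficients, run this classification, and keep only the "chaotic" cases for rendering.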
https://superliminal.com/fractals/bbrot/
The above image shows the entire Buddhabrot object. Producing it requires only some very simple modifications to the traditional Mandelbrot rendering technique: instead of selecting one initial point on the complex plane per pixel, initial points are selected randomly from the image region (or larger, as needed). Each initial point is then iterated using the standard Mandelbrot function to first test whether it escapes from the region near the origin. Only those that do escape are re-iterated in a second pass (the ones that don't escape, i.e. those believed to be within the Mandelbrot set, are ignored). During re-iteration, I increment a counter for each pixel the orbit lands on before eventually exiting. Every so often, the current array of "hit counts" is output as a grayscale image. Eventually, successive images barely differ from each other, ultimately converging on the one above.
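The two-pass procedure described above condenses into a short sketch (tiny resolution, hit counts only, no image output; the sample count and iteration cap are illustrative):

```python
# Minimal Buddhabrot: sample random c, keep escapers, re-iterate and
# accumulate per-pixel hit counts for their orbit points.
import random

W, H = 64, 64
MAX_ITER = 200
counts = [[0] * W for _ in range(H)]

def escapes(c, max_iter=MAX_ITER):
    # First pass: does z -> z^2 + c leave the |z| <= 2 disk?
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return True
    return False

def splat(c, max_iter=MAX_ITER):
    # Second pass: replay the orbit, incrementing each pixel it visits.
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            break
        px = int((z.real + 2) / 4 * W)   # map [-2, 2] onto pixel coords
        py = int((z.imag + 2) / 4 * H)
        if 0 <= px < W and 0 <= py < H:
            counts[py][px] += 1

random.seed(0)
for _ in range(20000):
    c = complex(random.uniform(-2, 2), random.uniform(-2, 2))
    if escapes(c):            # only escaping orbits contribute
        splat(c)

print(sum(map(sum, counts)) > 0)   # → True
```

Normalizing `counts` to grayscale and writing it out periodically gives exactly the converging image sequence described.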
Is it possible to use the Buddhabrot technique on the lyapunov fractals ?
Seems to me that the images on Bourke's site _are_ produced using the general "Buddhabrot" technique (splatting points onto an image). Although each image appears to only represent a single orbit sequence and the reject condition is inverted so that only stable orbits are shown.
I've personally found the technique very versatile and have had a lot of fun playing around with it and exploring different variations. Was excited enough about the whole thing that I created a website for sharing some of my explorations: https://www.fractal4d.net/ (shameless self-advertisement)
With the exception of some Mandelbrot-style images all the rest are produced by splatting complex-valued orbit points onto an image in one way or another.
Back when I had all the time in the world I made this https://attractors.ronvalstar.nl/ I never could quite get the hang of Lyapunov exponents though.
Similar post on the Hénon attractor 4h ago: https://news.ycombinator.com/item?id=45424223
Also, from that page: https://towardsdatascience.com/attractors-in-neural-network-...
These visualizations are beautiful. I'm a musician at heart so I really geek out about bifurcation maps. You get to see the exquisite relationship between chaos and form. It's like nature and math producing visual jazz. Thanks for a kick ass addition cs702!