Here's my version of a design taken from the Jared Tarbell talk that I referenced earlier.
The recipe is simple:
* Fully connect points that are equispaced around the circumference of a circle
* Repeat for circles of various radii
This exercise posed an interesting challenge. Full details in my repository here, but the short version is that I thought I could take a clever approach using Processing's scale transform and—well—it didn't pan out.
It was pretty obvious from the symmetry of the pattern that I would want to do my math in polar coordinates and then convert to Cartesian at the last moment. And rather than recalculating the points around the edge of the circle whenever I change the radius, I thought I could leave the points' r and theta (and their equivalent x and y) untouched and use Processing's scale() transform to change the underlying basis vectors, and hence the points' positions on the screen. Neat! No recalculations!
Unfortunately, scale() scales everything. It's not just that all the basis vectors are scaled: the actual screen representation of a pixel is scaled too. So if I call scale(8), all of a sudden any line that used to be one pixel wide is now 8 pixels wide. And although I can scale the resulting line widths back down, I cannot go below the new representation of 1 pixel. For example, the minimum value that I can pass to strokeWeight() is 1, and that is 1 transformed pixel, not 1 actual pixel. So when I use a large value in scale(), my lines suddenly get thick and they stay thick. Here's a detail from an early iteration of the design:
The solution is not to use scale(): instead, simply recalculate the points.
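The recalculation is cheap anyway. In plain Python (rather than the actual Processing sketch, which is in the repository), it boils down to something like this; the function name is mine:

```python
import math

def circle_points(n, r):
    """Return n equally spaced (x, y) points on a circle of radius r.

    The math is done in polar coordinates (theta per point) and
    converted to Cartesian at the last moment.
    """
    points = []
    for i in range(n):
        theta = 2 * math.pi * i / n
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points
```

Calling this afresh for each radius keeps strokeWeight() in real pixels, since no transform is ever applied.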
I have a vague childhood memory of visiting a museum and seeing some Victorian contraption of pendulums and pulleys that would produce lovely swirling, swooshing designs from the pen clamped at its center. The designs weren't Lissajous figures, they were far more complex than that. What were they? And what were the devices that produced them?
Sand pendulums? No, I think that was from "Vision On" with Tony Hart. But searching for "sand pendulums" did take me to the archive of the Bridges Organization, which "oversees the annual Bridges conference on mathematical connections in art, music, architecture, education, and culture". OK, I feel I should have known about that and am ashamed that I did not. Moving on…
Googling for examples is much easier now that I know these devices are called harmonographs. It turns out that several artists are working in this space and producing delightful work. Here's an example from Rick Hall (it's also the source of the image at the top of this post):
Now I just need to work out how to build one of these creatures.
Yesterday's post about Mary Wagner's giant spirographs prompted two lines of thought:
* Wouldn't it be cool to actually make some of those cogs?
* Wouldn't it be cool to prototype them in code?
It didn't take me long to put the first of these on ice. My local maker space has all the necessary kit to convert a sketch on my laptop into something solid but oh my, gears are hard. There seem to be all sorts of ways of getting a pair of gears to grind immovably into one another. I suspect I'd get to explore that space thoroughly before actually getting something to work. I don't have the patience for that. (If you are more patient than me, check out this guide to gear construction from Make.)
Let's turn to the code then.
There are plenty of spirograph examples out in the wild (the one by Nathan Friend is probably the best known). But if I want to build one of my own, that's going to need some math.
Wikipedia didn't let me down. Apparently the pattern produced by a spirograph is an example of a roulette, the curve that is traced out by a point on a given curve as that curve rolls without slipping on a second fixed curve. In the case of a spirograph we have two roulettes:
* A circle rolling without slipping inside a circle (this traces a hypotrochoid); and
* A circle rolling without slipping outside a circle (this traces an epitrochoid).
The math is straightforward (here and here) so no real barrier to coding this up, right?
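For reference, the standard parametric equations translate to Python like this. Here R is the fixed circle's radius, r the rolling circle's, and d the pen's distance from the rolling circle's center; the function names are mine:

```python
import math

def hypotrochoid(R, r, d, t):
    """Pen position at roll angle t for a circle of radius r rolling
    without slipping inside a fixed circle of radius R."""
    return ((R - r) * math.cos(t) + d * math.cos((R - r) / r * t),
            (R - r) * math.sin(t) - d * math.sin((R - r) / r * t))

def epitrochoid(R, r, d, t):
    """As above, but with the small circle rolling outside the fixed one."""
    return ((R + r) * math.cos(t) - d * math.cos((R + r) / r * t),
            (R + r) * math.sin(t) - d * math.sin((R + r) / r * t))
```

Sweep t over enough revolutions for the curve to close (it closes after r / gcd(R, r) turns when R and r are integers) and join successive points with line segments.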
Hmm. As usual one line of enquiry opens up several others. Sure, spirographs look nice but they are rather simple, no? What are our options if we want something more complex but still aesthetically satisfying? Wouldn't it be more fun to play with that? I mean the math couldn't be much harder, right?
I have been tinkering around with the code in Pearson's "Generative Art" and came up with this and the image above.
The author's intention was to show how complex patterns could spontaneously emerge from the interaction of multiple components that individually exhibit simple behavior. In this case, a couple of dozen invisible disks track across the display; whenever the disks intersect they cause a circle to suddenly appear at the point of intersection. So a little like the bubble chamber that I'd used in physics lab at university. You don't observe the particle directly, you observe the evidence of its passage through a medium. (My code is here if you want to play around with it.)
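The heart of the mechanism is just an overlap test between pairs of disks. Pearson's actual code is structured differently; this is a minimal sketch with names of my own choosing:

```python
import math

def intersection_point(c1, r1, c2, r2):
    """If two disks overlap, return the midpoint between their centers
    (where the sketch would make a circle appear); otherwise None."""
    dx, dy = c2[0] - c1[0], c2[1] - c1[1]
    dist = math.hypot(dx, dy)
    if dist == 0 or dist >= r1 + r2:
        return None
    return ((c1[0] + c2[0]) / 2, (c1[1] + c2[1]) / 2)
```

Run that over every pair of disks on every frame and the visible pattern emerges entirely from the invisible disks' paths.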
But what the images really remind me of are the belemnites that Cynthia and I had tried (and failed) to find in the fossil beds of Dorset and Yorkshire.
Belemnites are the fossilized remains of little squid-like creatures. Most of the time you find individuals—a small one looks a little like a shark's tooth—but sometimes you find a mass of them in a single slab, a "belemnite battlefield".
How do these battlefields occur? I found a paper by Doyle and MacDonald from 1993 in which the authors identify five pathways: post-spawning mortality, catastrophic mass mortality, predation concentration, stratigraphical condensation, and resedimentation. I don't know which occurred in the slab shown above but hey, I now know more potential pathways than I did before I read that paper.
And that's why art is fun. One moment you are messing around with RGB codes to get just the right sandy color for the background of a picture, the next you are researching natural phenomena that until an hour ago you never knew even had a name.
When you run that, do you see a nice blue histogram describing the dimensions of farmyard animals? Lucky you. I don't. Because I forgot the magic incantation that needs to go after the plotting call:

```python
import matplotlib.pyplot as plt
plt.show()
```
And another thing: this shouldn't work. According to their own documentation, getting matplotlib to operate on OSX—which is what I'm attempting—involves a titanic struggle of framework builds against regular builds, incompatible backends, and bizarre specificity regarding my environment manager (my use of conda apparently means that any perceived irregularities in behaviour are my own damn fault).
OK, I'm done venting. Until the next time I forget the incantation and have to re-learn this.
I had speculated in an earlier post that I could get better (pseudo-)random colors if I pulled from color-mixing models like Pantone or perceptual color models like NCS or Munsell. I'd love to, but it turns out that they are all proprietary. That's a dead end, then.
However, my instinct that HSB is probably a better model to draw from than RGB is supported elsewhere. There's an old post on Martin Ankerl's blog with an interesting discussion of the issue, and he comes out firmly on the HSB side.
So let's think about how I want to vary hue, saturation, and brightness:
* Hue: I don't really care about the underlying color so I will just pull from a uniform distribution between 0 and 1. (By the way, I'm assuming H, S, and B are all on a 0-1 scale.)
* Saturation: I know I'm going to want some sort of lumpiness but I'm not sure exactly how much and where. Maybe I want a lot of the colors bleached out. Or really saturated. This suggests that I should pull randomly from a beta distribution and then play around with the distribution's parameters to change where the lump or lumps occur.
* Brightness: ditto. I want it non-uniform but tweakable. Another beta distribution.
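Putting those three choices together, the sampling looks something like this. The beta parameters here are illustrative placeholders, not the values I settled on:

```python
import random

def random_color(sat_a=2.0, sat_b=5.0, bri_a=5.0, bri_b=2.0):
    """Return an (h, s, b) triple on the 0-1 scale: uniform hue,
    beta-distributed saturation and brightness."""
    h = random.uniform(0.0, 1.0)          # hue: no preference, so uniform
    s = random.betavariate(sat_a, sat_b)  # saturation: lumpy, tweakable
    b = random.betavariate(bri_a, bri_b)  # brightness: ditto
    return h, s, b
```

With colorMode(HSB, 1.0) set in the sketch, the triple can be fed straight to fill() or stroke(); shifting the a and b parameters moves the lump of each distribution around.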
I'm now using the python version of Processing instead of Java. It may have some issue around importing third-party libraries but this last week of using the Java version has just reminded me how much I don't like that damn language.
The translation of the Java code to Python was simple; the inclusion of the beta distributions is straightforward too as they are already available in Python's random module.
If you want to play with the code it is in my repository here. If you want to mess around with the parameters of a beta distribution and see the impact on the shape of the distribution, there's a handy online tool here.
I show a couple of examples below. Personally I think they are an improvement on the earlier examples. (And yes, I've gone full Richter. No gutters here.)
Obviously, the colors are different every time I run the code. But it's the overall feel of how these colors work together that interests me. And this overall feel is—again, as one would expect—pretty sensitive to the parameters you set in the beta distributions for saturation and brightness.
Does the little script that I described in yesterday's post, which invents random titles for Harlequin romance novels, actually tell us anything about neural networks? No, it does not. You are not going to get any insight into why you should be using a recurrent network architecture rather than any other; you are not going to get any insight into—well—anything really. Honestly, it is not much more than a "Hello World" program that proves you have successfully installed TensorFlow and Keras. It's a piece of fluff, that's all.
It's a fun piece of fluff though. And I'd still like to read "The Wolf of When".