More curves; or "how do I google something whose name I don't know?"

I have a vague childhood memory of visiting a museum and seeing some Victorian contraption of pendulums and pulleys that would produce lovely swirling, swooshing designs from the pen clamped at its center. The designs weren't Lissajous figures; they were far more complex than that. What were they? And what were the devices that produced them?

Sand pendulums? No, I think that was from "Vision On" with Tony Hart. But searching for "sand pendulums" did take me to the archive of the Bridges Organization, which "oversees the annual Bridges conference on mathematical connections in art, music, architecture, education, and culture". OK, I feel I should have known about that and am ashamed that I did not. Moving on…

My point of entry into the archive was a paper by Douglas McKenna, "From Lissajous to Pas de Deux to Tattoo: The Graphic Life of a Beautiful Loop". Victory! It mentions the device I was trying to recall. The harmonograph. Yes, those are the patterns I remember, those are the devices.

Googling for examples is much easier now I know what the things are called. It turns out that several artists are working in this space and producing delightful work. Here's an example from Rick Hall (it's also the source of the image at the top of this post):

Source: http://rickhall.co.uk/harmonograph-2/

Now I just need to work out how to build one of these creatures.
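While I work that out, I can at least fake one in software. A harmonograph trace is commonly modeled as a damped sinusoid on each axis, and that takes only a few lines of Processing (Python mode). In this sketch every amplitude, frequency, phase, and damping constant is an arbitrary number I picked to play with, not a measurement of any real device:

# A toy two-pendulum harmonograph: each axis is a damped sinusoid.
# All constants below are arbitrary; nudge them and watch the design change.
def setup():
    size(600, 600)
    background(255)
    stroke(0, 40)  # translucent ink, so overlapping passes build up
    noFill()
    translate(width / 2, height / 2)
    beginShape()
    t = 0.0
    while t < 300:
        decay = exp(-0.005 * t)
        x = 250 * sin(2.01 * t + HALF_PI) * decay
        y = 250 * sin(3.00 * t) * decay
        vertex(x, y)
        t += 0.01
    endShape()

Detuning the two frequencies slightly (2.01 against 3.00 above) is what gives the trace its slow, swirling drift.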

Spirograph math

Yesterday's post about Mary Wagner's giant spirographs prompted two lines of thought:

  1. Wouldn't it be cool to actually make some of those cogs?
  2. Wouldn't it be cool to prototype them in code?

It didn't take me long to put the first of these on ice. My local maker space has all the necessary kit to convert a sketch on my laptop into something solid but oh my, gears are hard. There seem to be all sorts of ways of getting a pair of gears to grind immovably into one another. I suspect I'd get to explore that space thoroughly before actually getting something to work. I don't have the patience for that. (If you are more patient than me, check out this guide to gear construction from Make.)

Let's turn to the code then.

There are plenty of spirograph examples out in the wild (the one by Nathan Friend is probably the best known). But if I want to build one of my own, that's going to need some math.

Wikipedia didn't let me down. Apparently the pattern produced by a spirograph is an example of a roulette, the curve that is traced out by a point on a given curve as that curve rolls without slipping on a second fixed curve. In the case of a spirograph we have two roulettes:

  1. A circle rolling without slipping inside a circle (this is a hypotrochoid); and
  2. A circle rolling without slipping outside a circle (this is an epitrochoid).

Here's a hypotrochoid:

Source: https://commons.wikimedia.org/wiki/File:HypotrochoidOutThreeFifths.gif

And here's an epitrochoid:

Source: https://commons.wikimedia.org/wiki/File:EpitrochoidOn3-generation.gif

The math is straightforward (here and here), so there's no real barrier to coding this up, right?
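To make that concrete: for a fixed circle of radius R, a rolling circle of radius r, and a pen at distance d from the rolling circle's center, the hypotrochoid is

x(t) = (R - r)·cos(t) + d·cos(((R - r)/r)·t)
y(t) = (R - r)·sin(t) - d·sin(((R - r)/r)·t)

Here's a minimal Processing (Python mode) sketch, with R, r, and d set to arbitrary example values:

# Hypotrochoid: a circle of radius r rolling inside a circle of radius R,
# with the pen a distance d from the rolling circle's center.
# R, r, and d are example values -- try your own.
R, r, d = 150.0, 65.0, 45.0

def setup():
    size(500, 500)
    background(255)
    stroke(0)
    noFill()
    translate(width / 2, height / 2)
    beginShape()
    t = 0.0
    while t < 30 * TWO_PI:  # enough turns for the curve to close
        x = (R - r) * cos(t) + d * cos((R - r) / r * t)
        y = (R - r) * sin(t) - d * sin((R - r) / r * t)
        vertex(x, y)
        t += 0.01
    endShape()

The epitrochoid is the same recipe with (R - r) swapped for (R + r) and the sign flipped on the cosine term that carries d.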

Hmm. As usual one line of enquiry opens up several others. Sure, spirographs look nice but they are rather simple, no? What are our options if we want something more complex but still aesthetically satisfying? Wouldn't it be more fun to play with that? I mean the math couldn't be much harder, right?

More on that tomorrow…

Belemnite battlefields

I have been tinkering around with the code in Pearson's "Generative Art" and came up with this and the image above.

The author's intention was to show how complex patterns could spontaneously emerge from the interaction of multiple components that individually exhibit simple behavior. In this case, a couple of dozen invisible disks track across the display; whenever two disks intersect, they cause a circle to appear at the point of intersection. It's a little like the bubble chamber that I'd used in the physics lab at university: you don't observe the particle directly, you observe the evidence of its passage through a medium. (My code is here if you want to play around with it.)
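The heart of it is just a pairwise distance check. Something like this, reconstructed from memory rather than lifted from the repository (the function name and the disk attributes are mine):

# Two disks intersect when their centers are closer than the sum of
# their radii; mark the midpoint of each overlap with a circle.
# Assumes each disk object carries x, y, and r attributes.
def draw_intersections(disks):
    for i, a in enumerate(disks):
        for b in disks[i + 1:]:
            if dist(a.x, a.y, b.x, b.y) < a.r + b.r:
                ellipse((a.x + b.x) / 2, (a.y + b.y) / 2, 20, 20)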

But what the images really remind me of are the belemnites that Cynthia and I had tried (and failed) to find in the fossil beds of Dorset and Yorkshire.

Belemnites are the fossilized remains of little squid-like creatures. Most of the time you find individuals—a small one looks a little like a shark's tooth—but sometimes you find a mass of them in a single slab, a "belemnite battlefield".

Source: thefossilforum.com

How do these battlefields occur? I found a paper by Doyle and MacDonald from 1993 in which the authors identify five pathways: post-spawning mortality, catastrophic mass mortality, predation concentration, stratigraphical condensation, and resedimentation. I don't know which occurred in the slab shown above but hey, I now know more potential pathways than I did before I read that paper.

And that's why art is fun. One moment you are messing around with RGB codes to get just the right sandy color for the background of a picture, the next you are researching natural phenomena that until an hour ago you never knew even had a name.

Random colors again

I had speculated in an earlier post that I could get better (pseudo-)random colors if I pulled from color mixing models like Pantone or perceptual color models like NCS or Munsell. I'd love to, but it turns out that they are all proprietary. That's a dead end, then.

However, my instinct that HSB is probably a better model from which to draw than RGB is supported elsewhere. There's an old post on Martin Ankerl's blog with an interesting discussion of the issue, and he comes out firmly on the HSB side.

So let's think about how I want to vary hue, saturation, and brightness:

  • Hue: I don't really care about the underlying color so I will just pull from a uniform distribution between 0 and 1. (By the way, I'm assuming H, S, and B are all on a 0-1 scale.)

  • Saturation: I know I'm going to want some sort of lumpiness but I'm not sure exactly how much and where. Maybe I want a lot of the colors bleached out. Or really saturated. This suggests that I should pull randomly from a beta distribution and then play around with its parameters to change where the lump (or lumps) occur (see the sketch after this list).

  • Brightness: ditto for brightness. I want it non-uniform but tweakable. Another beta distribution.
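Pulling those three rules together takes only a few lines, since Python's random module provides betavariate and Processing's colorMode can put H, S, and B on a 0-1 scale. A minimal sketch of the idea (the alpha/beta pairs are the ones used in the examples below; the 10x10 grid is just my choice for illustration):

from random import random, betavariate

# Knobs to play with: (alpha, beta) for the two beta distributions.
SAT_A, SAT_B = 1.2, 0.9
BRI_A, BRI_B = 0.8, 0.4

def random_hsb():
    h = random()                   # hue: uniform on [0, 1)
    s = betavariate(SAT_A, SAT_B)  # saturation: lumpy
    b = betavariate(BRI_A, BRI_B)  # brightness: lumpy but tweakable
    return h, s, b

def setup():
    size(500, 500)
    colorMode(HSB, 1.0)  # H, S, and B all on a 0-1 scale
    noStroke()
    for row in range(10):
        for col in range(10):
            h, s, b = random_hsb()
            fill(h, s, b)
            rect(col * 50, row * 50, 50, 50)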

Code

I'm now using the Python version of Processing instead of the Java one. It may have some issues around importing third-party libraries, but this last week of using the Java version has just reminded me how much I don't like that damn language.

The translation of the Java code to Python was simple; including the beta distributions was straightforward too, as they are already available in Python's random module.

If you want to play with the code, it is in my repository here. If you want to mess around with the parameters of a beta distribution and see the impact on the shape of the distribution, there's a handy online tool here.

Results

I show a couple of examples below. Personally I think they are an improvement on the earlier examples. (And yes, I've gone full Richter. No gutters here.)

(α, β) saturation = (1.2, 0.9); brightness = (0.8, 0.4)
(α, β) saturation = (1.1, 0.9); brightness = (0.9, 0.6)

Obviously, the colors are different every time I run the code. But it's the overall feel of how these colors work together that interests me. And this overall feel is—again, as one would expect—pretty sensitive to the parameters you set in the beta distributions for saturation and brightness.

"Random" colors? Trickier than I thought

Here was my first attempt at a simple grid of colored squares. And yes, it is more than a little influenced by Richter's 4900 Colours.

That is odd. I am using the RGB color model—typical in computer graphics—in which each element has 256 levels. I randomly select one of the 256 levels for each of the three dimensions, so I have a space of 256³ possible colors from which to draw. That's a big number (16,777,216 to be precise).
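The drawing loop is as simple as it sounds. Something along these lines, a Python-mode reconstruction of the idea rather than the exact script (that one is linked in the Code section below):

from random import randint

# A 5x5 grid of squares, each color drawn uniformly from RGB space.
def setup():
    size(500, 500)
    noStroke()
    for row in range(5):
        for col in range(5):
            fill(randint(0, 255), randint(0, 255), randint(0, 255))
            rect(col * 100, row * 100, 100, 100)

Switching the experiment to HSB needs nothing more than a colorMode(HSB, 255) call before the loop.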

So why do so many of those 25 squares look so similar? Who knew there were so many minty greens?

Maybe it is a function of my color model. Let's try Hue-Saturation-Brightness instead.

I guess it is more visually appealing but I seem to have an awful lot of almost-blacks now. In short: just as bad.

Color models

I simplistically thought "OK, so I might be making some false comparisons here. Who says that an HSB model needs to have 256 levels on each of its components? And is a unit difference equally perceptible irrespective of level, i.e. is the color (0, 0, 0) as distinguishable from (0, 0, 1) as the color (255, 255, 255) is from (255, 255, 254)? Perhaps I just need to choose a different number of levels and I'll get 'better' random colors".

Not so much. A quick Google search reveals a huge rabbit hole down which I could disappear. Even simple questions like "How many colors can we perceive?" give answers ranging from 100,000 to 10,000,000. And there are a bunch of perceptual color models that I had never heard of before. NCS? Munsell? I never got past Goethe. Maybe I should just find a Pantone color list and draw randomly off that.

This might take a while.

Code

By the way, if you want a copy of the Processing script that produced these images, you can find it here.

Why am I playing with generative art?

Because of this. And because I'm a philistine.

Source: Gerhard Richter, 4900 Colours, #479

It is usually Richter's big smeary abstracts that grab my attention (like this one); I've only recently come across his series 4900 Colours and 1024 Farben. Unfortunately I have only encountered these online (this site has a great analysis of why I should seek one out in person); I think that is why I indulged myself in the response of the philistine: "Huh, I bet I could do that".

Long story short: even creating a small panel with smooth, well-executed, regular patches of color is beyond my abilities (good enough for my living room wall, though). A more immediate and positive effect is that this project got me into generative art.

How so? Well, before committing acrylic to board I wanted to get an idea of what the end product would look like. So why not prototype it? Sure, I could write a Python script to knock something like this out, but I don't need to reinvent the wheel when tools like Processing and NodeBox already exist.

My first attempts with NodeBox were a bust. I liked the idea of playing with a GUI that allows you to build a network of nodes and edges representing a series of actions and the flow of their results, but in practice it was just too clunky and I was too impatient. "Why does this node have so many input connections? Why can't I right-click this edge? How the heck do I reseed the random number generator?"

So Processing it was. Processing comes in two flavors: Java and Python. I have been programming in Python for years and I really wanted to like that version. But it is just not as mature and complete as the original Java version. I guess it is time for me to brush up my Java.

These are early days in my exploration of generative art. I think I am going to kick off with a Richter simulator. Seems simple, right? I shall post images and code when I can.

Photographs I have failed to take

This is not my photograph.

Source: @helengrantsays

I had intended to write about a fun synchronicity that I recently encountered. I have been reading Austin Kleon's "Show Your Work!". One chapter—"Open Up Your Cabinet of Curiosities"—talks about sharing your sources of inspiration, the items that pique your interest, the oddities that you find especially engaging. Indeed, the decision to bring this blog back from its three-year hiatus was a direct result of reading that chapter.

Then just a week or so later, I'm with my partner in the Scottish National Gallery of Modern Art and we find ourselves opposite a fine example of such a cabinet: the one that was assembled by the surrealist Roland Penrose. It contains artefacts from the sublime to the silly. A cast of Lee Miller's hands. A glass pig.

"Nice", I think. "I should take a photo of that. But my phone's in the bottom of my bag and there's probably too much reflection and anyway am I getting peckish? Yes I think I am, maybe we should get a cup of tea and a bun and anyway I can probably get a perfectly good picture from the museum's website".

Of course, when I get home, there's no such image to be found. It turns out that the museum is surprisingly parsimonious with the images that it shares. I finally track down a photo: it appears that this cabinet has only been photographed and shared twice; one of these photos is on the blog and Instagram feed of Helen Grant. And how about photos of the items in the case? Maybe a photo of the other cast that Penrose commissioned, his foot and Miller's nestled together? No, nothing. Absolutely no trace online.

I hope that I can learn from this. Apparently it's not enough to have the camera with you; you actually need to use it. I could probably apply the same lesson to that almost-empty notebook that is weighing down my pocket.

Fingers crossed.

Run Processing from the command line

I have recently started to play with Processing as I explore generative art.

It's a lovely piece of software but, out of the box, it does not really fit my typical workflow. The problem? It's an IDE. I can't stand IDEs. Not only do I have to learn a new language but I have to learn a whole new toolkit for working in it. I much, much prefer being able to compose a script in my favorite text editor and then summon it from the command line.

Fortunately, Processing supports this workflow. I found this post at dsfcode.com that does a great job of describing how to set it up. And once I had done so, it became easy to get into my usual edit/run/re-edit cycle.

The only thing to watch out for is that the order of parameters in the processing-java command matters. So this command succeeds:

$ processing-java --sketch=`pwd`/waveclock/ \
> --output=`pwd`/../outputs/waveclock/ \
> --force --run
Finished

but this does not:

$ processing-java --sketch=`pwd`/waveclock/ \
> --output=`pwd`/../outputs/waveclock/ \
> --run --force
The output folder already exists. Use --force to remove it.

By the way, the sketch that I am running is a slight modification of case study 4.2 in Pearson's "Generative Art" and its output is shown above. My code is here.