Belemnite battlefields

I have been tinkering around with the code in Pearson's "Generative Art" and came up with this and the image above.

The author's intention was to show how complex patterns can emerge spontaneously from the interaction of multiple components that individually exhibit simple behavior. In this case, a couple of dozen invisible disks track across the display; whenever two disks intersect, a circle appears at the point of intersection. It's a little like the bubble chamber I used in the physics lab at university: you don't observe the particle directly, you observe the evidence of its passage through a medium. (My code is here if you want to play around with it.)
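If that description sounds abstract, here is the mechanism reduced to its bones: a minimal sketch in Processing's Python mode. This is not Pearson's code or mine; the disk count, radius, and speeds are arbitrary choices for illustration.

from random import uniform

N_DISKS = 24
RADIUS = 40  # disk radius; an arbitrary choice

def setup():
    global disks
    size(500, 500)
    background(255)
    noFill()
    stroke(0, 30)
    # each disk is [x, y, dx, dy]: a position plus a constant velocity
    disks = [[uniform(0, width), uniform(0, height),
              uniform(-1, 1), uniform(-1, 1)] for _ in range(N_DISKS)]

def draw():
    # move every (invisible) disk one step
    for d in disks:
        d[0] += d[2]
        d[1] += d[3]
    # wherever two disks overlap, stamp a circle at the midpoint
    for i in range(N_DISKS):
        for j in range(i + 1, N_DISKS):
            a, b = disks[i], disks[j]
            if dist(a[0], a[1], b[0], b[1]) < 2 * RADIUS:
                ellipse((a[0] + b[0]) / 2, (a[1] + b[1]) / 2, RADIUS, RADIUS)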

But what the images really remind me of are the belemnites that Cynthia and I had tried (and failed) to find in the fossil beds of Dorset and Yorkshire.

Belemnites are the fossilized remains of little squid-like creatures. Most of the time you find individuals—a small one looks a little like a shark's tooth—but sometimes you find a mass of them in a single slab, a "belemnite battlefield".

Source: thefossilforum.com

How do these battlefields occur? I found a paper by Doyle and MacDonald from 1993 in which the authors identify five pathways: post-spawning mortality, catastrophic mass mortality, predation concentration, stratigraphical condensation, and resedimentation. I don't know which occurred in the slab shown above, but hey, I now know more potential pathways than I did before I read that paper.

And that's why art is fun. One moment you are messing around with RGB codes to get just the right sandy color for the background of a picture, the next you are researching natural phenomena that until an hour ago you never knew even had a name.

How many times am I going to forget this?

I have been using matplotlib on and off for over 15 years, either in its own right (in the early days) or as a deeply embedded dependency whenever I use pandas (the last couple of years).

So why do I always, always forget how to get the damn graphics to display?

Here's an example of plotting from the documentation on pandas.DataFrame.hist:

>>> df = pd.DataFrame({
...     'length': [1.5, 0.5, 1.2, 0.9, 3],
...     'width': [0.7, 0.2, 0.15, 0.2, 1.1]},
...     index=['pig', 'rabbit', 'duck', 'chicken', 'horse'])
>>> hist = df.hist(bins=3)

When you run that, do you see a nice blue histogram describing the dimensions of farmyard animals? Lucky you. I don't. Because I forgot the magic incantation that needs to come after it:

import matplotlib.pyplot as plt
plt.show()
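For my future self, then, here is the whole thing in one self-contained piece (the same data as the pandas docs example above):

import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame({
    'length': [1.5, 0.5, 1.2, 0.9, 3],
    'width': [0.7, 0.2, 0.15, 0.2, 1.1]},
    index=['pig', 'rabbit', 'duck', 'chicken', 'horse'])
df.hist(bins=3)
plt.show()  # the incantation; without it no window appears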

And another thing: this shouldn't work. According to their own documentation, getting matplotlib to operate on OS X—which is what I'm attempting—involves a titanic struggle of framework builds against regular builds, incompatible backends, and bizarre specificity regarding my environment manager (my use of conda apparently means that any perceived irregularities in behaviour are my own damn fault).

OK, I'm done venting. Until the next time I forget the incantation and have to re-learn this.

Random colors again

I had speculated in an earlier post that I could get better (pseudo-)random colors if I pulled from color mixing models like Pantone or perceptual color models like NCS or Munsell. I'd love to, but it turns out that they are all proprietary. That's a dead end, then.

However, my instinct that HSB is probably a better model to draw from than RGB is supported elsewhere. There's an old post on Martin Ankerl's blog with an interesting discussion of the issue, and he comes down firmly on the HSB side.

So let's think about how I want to vary hue, saturation, and brightness:

  • Hue: I don't really care about the underlying color, so I will just pull from a uniform distribution between 0 and 1. (By the way, I'm assuming hue, saturation, and brightness are all on a 0–1 scale.)

  • Saturation: I know I'm going to want some sort of lumpiness, but I'm not sure exactly how much and where. Maybe I want a lot of the colors bleached out. Or really saturated. This suggests that I should pull randomly from a beta distribution and then play around with the distribution's parameters to change where the lump (or lumps) occur.

  • Brightness: ditto. I want it non-uniform but tweakable. Another beta distribution. (A sketch of the whole recipe follows this list.)
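Here is a minimal sketch of that recipe in plain Python. My actual code runs inside Processing, but random.betavariate and colorsys are both in the standard library; the default parameters below are simply the values from the first example in the Results section.

from random import uniform, betavariate
from colorsys import hsv_to_rgb  # standard library; HSB and HSV are the same model

def random_color(sat_ab=(1.2, 0.9), bri_ab=(0.8, 0.4)):
    # returns (r, g, b) on a 0-1 scale
    h = uniform(0, 1)           # hue: uniform, any color is fair game
    s = betavariate(*sat_ab)    # saturation: lumpy and tunable
    b = betavariate(*bri_ab)    # brightness: ditto
    return hsv_to_rgb(h, s, b)

swatches = [random_color() for _ in range(25)]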

Code

I'm now using the Python version of Processing instead of the Java one. It may have some issues around importing third-party libraries, but this last week of using the Java version has reminded me how much I don't like that damn language.

The translation of the Java code to Python was simple, and including the beta distributions was straightforward too, as they are already available in Python's random module (random.betavariate).

If you want to play with the code, it is in my repository here. If you want to mess around with the parameters of a beta distribution and see the impact on the shape of the distribution, there's a handy online tool here.

Results

I show a couple of examples below. Personally I think they are an improvement on the earlier examples. (And yes, I've gone full Richter. No gutters here.)

(α, β) saturation = (1.2, 0.9); brightness = (0.8, 0.4)
(α, β) saturation = (1.1, 0.9); brightness = (0.9, 0.6)

Obviously, the colors are different every time I run the code. But it's the overall feel of how these colors work together that interests me. And this overall feel is—again, as one would expect—pretty sensitive to the parameters you set in the beta distributions for saturation and brightness.

Harlequin Romance Titles: Postscript

Does the little script that I described in yesterday's post, which invents random titles for Harlequin romance novels, actually tell us anything about neural networks? No, it does not. You are not going to get any insight into why you should be using a recurrent network architecture rather than any other; you are not going to get any insight into—well—anything really. Honestly, it is not much more than a "Hello World" program that proves you have successfully installed TensorFlow and Keras. It's a piece of fluff, that's all.

It's a fun piece of fluff though. And I'd still like to read "The Wolf of When".

Randomly generated titles for Harlequin romance novels

You can use recurrent neural networks (RNNs) to generate text: feed one the names of death metal bands (say) and it will start creating its own. Ditto ice-cream flavors, paint colors, Trump tweets, etc.

One popular package that allows the likes of me to play with this technology was developed by a data scientist, Max Woolf. He's got a great how-to page on his blog. Lifehacker also have an extremely simple and helpful guide that relies on Max's work.

Data

What random text should I generate? I have to feed an RNN a few thousand examples before it can start creating its own (or at least creating text that feels realistic). Fortunately I found a list of all of Harlequin's books from 1949 to 2012: about 4,400 titles in total. That is large enough to give the RNN a chance to identify the patterns in the titles, yet small enough that training will not need to run for hours before giving a result.

Code

You can find the full version of this in my git repository here. The short version is:

  • Install python3, tensorflow, and textgenrnn on your computer.
  • Create "training" and "run" scripts in Python. The training script reads all the example data and discovers its underlying patterns; the run script generates new examples (a sketch of both follows this list).
  • Enjoy!
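The two scripts really are tiny. Here is a sketch of what they might look like; the file names are my own, and the default weights filename is my recollection of textgenrnn's behavior, so treat both as assumptions rather than gospel.

# train.py: read the example titles and learn their patterns
from textgenrnn import textgenrnn

textgen = textgenrnn()
textgen.train_from_file('harlequin_titles.txt', num_epochs=20)
# weights are written to textgenrnn_weights.hdf5 by default

# run.py: load the trained weights and invent new titles
textgen = textgenrnn('textgenrnn_weights.hdf5')
textgen.generate(10, temperature=0.6)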

Results

The RNN's "run" step has a temperature parameter that you can dial up or down between zero and one. I think this is something of a misnomer. Lifehacker describe it as a "creativity dial"; I prefer to think of it as a weirdness control. Here are some of the titles that the RNN produced, ordered by temperature.

TEMPERATURE: 0.2
The Man From The Heart
The Baby Bonding
The Wild Sister
The Man In The Bride
The Sun And The Sheikh
The Sheikh's Convenient Wife
TEMPERATURE: 0.4
The Girl At Saltbush Bay
The Bachelor And The Playboy
The Dream And The Sunrancher
The Touch Of The Single Man
The Billionaire Bride
Hunter's Daughter
TEMPERATURE: 0.6
The Wolf Of When
A Savage Sanctuary
The Rancher's Forever Drum
Only My Heart Of Hearts
The Sheikh's Daughter
The Sheriff's Mother Bride
TEMPERATURE: 0.8
Chateau Pland
The Golden Pag
Just Mother And The Candles
Reluctant Paragon
The Unexpected Islands
Expecting the Young Nurse
TEMPERATURE: 1.0
The Girl In A Whirlwind
In Village Touch (Doctor Season)
Rapture Of The Parka
Portrait Of Works!
"Trave Palagry Surrender, Baby"
Bridegroom On Her Secret

So, which of these would you read?

"Random" colors? Trickier than I thought

Here was my first attempt at a simple grid of colored squares. And yes, it is more than a little influenced by Richter's 4900 Colours.

That is odd. I am using the RGB color model—typical in computer graphics—in which each channel has 256 levels. I randomly select one of the 256 levels for each of the three channels, so I have a space of 256³ possible colors from which to draw. That's a big number (16,777,216 to be precise).

So why do so many of those 25 squares look so similar? Who knew there were so many minty greens?

Maybe it is a function of my color model. Let's try Hue-Saturation-Brightness instead.

I guess it is more visually appealing but I seem to have an awful lot of almost-blacks now. In short: just as bad.
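For the record, here is roughly what both naive draws look like in plain Python (illustrative only; my actual script is the Processing one linked below):

from random import randrange, random
from colorsys import hsv_to_rgb

# RGB: one uniform draw from the 256 levels of each channel
rgb = tuple(randrange(256) for _ in range(3))

# HSB: uniform hue, saturation, and brightness, converted to RGB for display
h, s, b = random(), random(), random()
rgb_from_hsb = tuple(int(c * 255) for c in hsv_to_rgb(h, s, b))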

Color models

I simplistically thought "OK, so I might be making some false comparisons here. Who says that an HSB model needs to have 256 levels on each of its components? And is a unit difference equally perceptible irrespective of level, i.e. is the color (0, 0, 0) as distinguishable from (0, 0, 1) as the color (255, 255, 255) is from (255, 255, 254)? Perhaps I just need to choose a different number of levels and I'll get 'better' random colors".

Not so much. A quick Google search reveals a huge rabbit hole down which I could disappear. Even simple questions like "How many colors can we perceive?" give answers ranging from 100,000 to 10,000,000. And there are a bunch of perceptual color models that I had never heard of before. NCS? Munsell? I never got past Goethe. Maybe I should just find a Pantone color list and draw randomly off that.

This might take a while.

Code

By the way, if you want a copy of the Processing script that produced these images, you can find it here.

Why am I playing with generative art?

Because of this. And because I'm a philistine.

Source: Gerhard Richter, 4900 Colours, #479

It is usually Richter's big smeary abstracts that grab my attention (like this one); I've only recently come across his series 4900 Colours and 1024 Farben. Unfortunately I have only encountered these online (this site has a great analysis of why I should seek one out in person); I think that is why I indulged myself in the response of the philistine: "Huh, I bet I could do that".

Long story short: even creating a small panel with smooth, well-executed, regular patches of color is beyond my abilities (good enough for my living room wall, though). A more immediate and positive effect is that this project got me into generative art.

How so? Well, before committing acrylic to board I wanted to get an idea of what the end product would look like. So why not prototype it? Sure, I could write a python script to knock something like this out, but I don't need to reinvent the wheel when tools like Processing and NodeBox already exist.

My first attempts with NodeBox were a bust. I liked the idea of playing with a GUI that lets you build a network of nodes and edges representing a series of actions and the flow of their results, but in practice it was just too clunky and I was too impatient. "Why does this node have so many input connections? Why can't I right-click this edge? How the heck do I reseed the random number generator?".

So Processing it was. Processing comes in two flavors: Java and Python. I have been programming Python for years and I really wanted to like that version. But it is just not as mature and complete as the original Java. I guess it is time for me to brush up my Java.

These are early days in my exploration of generative art. I think I am going to kick off with a Richter simulator. Seems simple, right? I shall post images and code when I can.

Photographs I have failed to take

This is not my photograph.

Source: @helengrantsays

I had intended to write about a fun synchronicity that I recently encountered. I have been reading Austin Kleon's "Show Your Work!". One chapter—"Open Up Your Cabinet of Curiosities"—talks about sharing your sources of inspiration, the items that pique your interest, the oddities that you find especially engaging. Indeed, the decision to bring this blog back from its three-year hiatus was a direct result of reading that chapter.

Then just a week or so later, I'm with my partner in the Scottish National Gallery of Modern Art and we find ourselves opposite a fine example of such a cabinet: the one that was assembled by the surrealist Roland Penrose. It contains artefacts from the sublime to the silly. A cast of Lee Miller's hands. A glass pig.

"Nice", I think. "I should take a photo of that. But my phone's in the bottom of my bag and there's probably too much reflection and anyway am I getting peckish? Yes I think I am, maybe we should get a cup of tea and a bun and anyway I can probably get a perfectly good picture from the museum's website".

Of course, when I get home, there's no such image to be found. It turns out that the museum is surprisingly parsimonious with the images that it shares. I finally track down a photo: it appears that this cabinet has only been photographed and shared twice; one of these photos is on the blog and Instagram feed of Helen Grant. And how about photos of the items in the case? Maybe a photo of the other cast that Penrose commissioned, his foot and Miller's nestled together? No, nothing. Absolutely no trace online.

I hope that I can learn from this. Apparently it's not enough to have the camera with you, you actually need to use it. I could probably apply the same lesson to that almost empty notebook that is weighing down my pocket.

Fingers crossed.

Run Processing from the command line

I have recently started to play with Processing as I explore generative art.

It's a lovely piece of software but, coming out of the box, it does not really fit my typical workflow. The problem? It's an IDE. I can't stand IDEs. Not only do I have to learn a new language, but I have to learn a whole new toolkit for working in it. I much, much prefer being able to compose a script in my favorite text editor and then summon that script from the command line.

Fortunately Processing supports this workflow. I found this post at dsfcode.com that does a great job of describing how to set it up. And once I had done so, it became easy to slip back into my usual edit/run/re-edit cycle.

The only thing to watch out for is that the order of parameters in the processing-java command matters. So this command succeeds:

$ processing-java --sketch=`pwd`/waveclock/ \
> --output=`pwd`/../outputs/waveclock/ \
> --force --run
Finished

but this does not:

$ processing-java --sketch=`pwd`/waveclock/ \
> --output=`pwd`/../outputs/waveclock/ \
> --run --force
The output folder already exists. Use --force to remove it.

By the way, the sketch that I am running is a slight modification of case study 4.2 in Pearson's "Generative Art" and its output is shown above. My code is here.