Bottom-Up Art

How to draw real-world objects without meaning to

Accidental psychedelic donkey face

It’s not just a cool band name (or ship name). It happened to me once! After days of doodling, I’d filled a page in my middle school notebook with detailed abstract designs. A line here, some dots there, a nice sweeping curve, etc. Then a friend walked by and asked, “Is that a donkey?” After denying this absurdity, I held my handiwork at arm’s length and...

Not only was it the face of a donkey, it was a better donkey than I’d have drawn intentionally. It boldly leapt from the page, and all who saw it instantly accused me of drawing under the influence of LSD. To top it all off, you could flip the damn thing upside down, and it was a different damn donkey.

How on earth did I accidentally draw a reversible psychedelic donkey face?

Well, it’s actually not as unlikely as it sounds.

  1. It was sure to be psychedelic-looking, because it was chock-full of weird little patterns like dots instead of photorealistic features like shaded surfaces.

  2. My intuitions about good-looking lines and patterns depend partly on everyday objects, including animals and plants. It’s hardly surprising that biology-inspired lines might accidentally produce something biological.

  3. I could’ve easily ended up drawing a dog, a cat, or any other object. Any of them would have been as impressive as the donkey.

  4. It’s hard to draw an abstract design that doesn’t look like anything. Stare at a cloud long enough, and it’ll look like something.

So, perhaps the sudden appearance of a donkey is not so mysterious after all.


Three ways of working

The lesson here is that you can work in several ways:

  1. Start with a subject like a person or mountain. Then, choose how to represent it using colors, lines, etc. (This is called representational art.)

  2. Start with a graphical idea, like a shape or symmetry. Explore it, but avoid representing any real-world objects. (This is called abstract art.)

  3. Start with a graphical idea. Then, as the piece develops, see what object it might look like, and try to bring that out. Or, after the piece is complete, see what it looks like, and give it a title to shape how people see it. (Perhaps I could call this semi-abstract art, or bottom-up representational art.)

Here’s a piece I drew (and then post-processed and colorized using Prisma). Like the psychedelic donkey, it was drawn using method 3. Does it look like anything?

I gave it the title Cliffs of Dover. Does that change your perspective? Here’s one more:

I call it Circus. It’s got color and it’s got tents - such is my defense.

From love-sickness to line-thickness, every artwork depends on a wide range of high- and low-level ideas. Use them how you like!

An Optimistic Rock Symphony

Ideas for a progressive rock album based on The Beginning of Infinity

The Beginning of Infinity had a mind-wide (and mind-widening) impact on me.

As I’m constantly writing new music, and looking for new ways of writing music, I wonder: could I write an album based on the book, with one piece per chapter?

I think so! As a test, I’ll take the first few chapters, summarize their main points, then describe a way to represent them in music.

(By the way, the Swedish pianist and composer Ernst Erlanson - a good friend of mine - also did something like this.)

Introduction

  • Progress is real: Grow in speed, volume, complexity, and joyfulness throughout.

  • Progress has a necessary beginning: Start with a unique event - heard only once.

  • Progress need never end: On several occasions, appear to end, but resume again.

1. The Reach of Explanations

Good explanations can apply to situations far beyond where they were created. Einstein’s theory of general relativity was discovered on Earth, but applies to the entire universe.

Start with a memorable melody, then repeat it again and again while changing everything around it: the rhythm, tempo, genre, harmony, etc. Show that the melody works in many situations.

2. Closer to Reality

We observe nothing directly. In fact, we often get a better picture of reality when our observations are less direct - when things like telescopes, theories, and computer programs lie between our eyes and distant galaxies.

  • Start with a melody to represent a physical phenomenon.

  • Then a new melody to represent observation/interpretation.

  • Then an imperfect copy of the first melody, to represent the errors of interpretation.

  • Repeat this group of three, each time keeping the first melody the same, but lengthening the second, and making the third a more faithful copy of the first.

This represents how adding layers of interpretation can improve our understanding of what is out there. With each repetition, it’s as if we’re using larger, better telescopes and seeing the stars ever more clearly.

3. The Spark

“Like an explosive awaiting a spark, unimaginably numerous environments in the universe are waiting out there, for aeons on end, doing nothing at all… Almost any of them would, if the right knowledge ever reached it, instantly and irrevocably burst into a radically different type of physical activity: intense knowledge-creation… transforming that environment from what is typical today into what could become typical in the future. If we want to, we could be that spark.”

Divide the piece (or sections within it) in two. The first half is simple, subdued, and mostly unchanging. Maybe ethereal, like Neptune, from Holst’s The Planets. The second half starts with the sudden arrival of a small-but-clear theme. Then it rapidly explodes in power and complexity.


I’ll stop there for now. The book has almost twenty chapters, and most of them could be an album of their own!

By the way, here’s a piece I wrote that was inspired by related ideas:

Xenotypography 101

Studying the writing of fictional alien civilizations

If aliens exist, what fonts do they use? To be fair, I don’t really care. But, asking the question allows me to call myself a xenotypographer (a term which has satisfyingly few Google results). That’s historic. It’s courageous, really.

Seriously, though. I’ve done some work on this. First, here is a mug.

Click it to see for yourself, but that mug - and Mars - rest on the desk of Elon Musk. The red symbols are from a fictional language called Marain (not Martian) in Iain Banks’s Culture novels. I think they look cool. Elon Musk thinks they look cool. You think they look cool. We all agree. They’re cool. But I actually did something about it! Again, I’m brave.

What to do? Why not generate all possible symbols in this language? But how?

Well, the mug’s symbols all fall within a template: a plus sign and an X, all wrapped in a square (see below, left). Incidentally, The Skyward Lament, as I like to call it, can also be created from this template (see below, right).

It's amazing how many images, and how many emotions, one can conjure from this template. Take a look.

Anyway, how many possible symbols are there? Well, again, the template is a plus sign, an X, and a square. The plus sign has four line segments, the X has four, and the square has eight (two on top, two on bottom, etc.). So, that’s a total of 4 + 4 + 8 = 16 line segments. Each one can be present or absent. That’s two possibilities each. So, every character can be represented by 16 bits of information, with each bit specifying whether a particular line segment is present or absent. That gives 2^16 = 65,536 possible characters. About 30% of these are either easy to confuse with each other or can’t be written without lifting your pen. That leaves us with around 46,000 characters. Here are 328 of them:
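
If you want to play with this yourself, here’s a minimal Python sketch of the enumeration. The segment names and their ordering are my own invention, and I haven’t tried to implement the “confusable or needs a lifted pen” filter - this just counts raw symbols:

```python
# A sketch of enumerating every symbol in the plus/X/square template.
# Each symbol is a 16-bit integer; bit i records whether segment i is drawn.
# The segment names and their ordering are my own arbitrary choices.

SEGMENTS = (
    ["plus_up", "plus_right", "plus_down", "plus_left"]      # plus sign: 4
    + ["x_ne", "x_se", "x_sw", "x_nw"]                       # X: 4
    + ["sq_top_l", "sq_top_r", "sq_right_t", "sq_right_b",   # square: 8
       "sq_bot_r", "sq_bot_l", "sq_left_b", "sq_left_t"]
)

def segments_of(symbol: int) -> list[str]:
    """Names of the segments present in a 16-bit symbol."""
    return [name for i, name in enumerate(SEGMENTS) if symbol >> i & 1]

all_symbols = range(2 ** 16)
print(len(all_symbols))                 # -> 65536

print(segments_of(0b0000000000001111))  # -> the four arms of the plus sign

# How many symbols use exactly 6 of the 16 segments?
print(sum(1 for s in all_symbols if bin(s).count("1") == 6))  # -> 8008
```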

They all consist of two polygons. In fact, this is an exhaustive list of all characters made up of two polygons. Here are a bunch of characters with arrows pointing to the top-right:

There are all sorts of interesting groups of characters.

But, to count as xenotypography, we’ve got to turn our attention not only to the symbols, but to the fonts! Here’s an elegant, thin-lined one:

And a bold one:

Starting only as Musk’s mug’s markings, Marain’s managed to maintain a miraculous, multi-year magnetism over my mind, and I’ve only shown you the half of it here.

7: The Selfish Gene

Genes and memes. Breeding and beheading. Penises and poo-eaters.

How does something as complex as a horse, or even an amoeba, come into existence? Why are there so many kinds of organisms, and why are some suspiciously similar? These questions were answered by Darwin, and new ones sprang up after him: if complex organisms are the product of variation and selection, how does the variation work? How does the selection work? What is being varied and selected? Individual bodies? Groups, like families and species? Something else entirely? These questions (and many more) are the subject of The Selfish Gene, by Richard Dawkins.

The answers all flow from one idea: genes are the subject of variation and selection. Not individuals. Not groups. One of the most important consequences of this view is that genes often prosper at the expense of individuals and groups. For instance, a male might live longer if it never fought for mating opportunities. But, a gene for safe celibacy dies with its holder. Genes for dangerous promiscuity don’t - they spread. Similarly, if a gene helps one individual reproduce more than its competitors, it will spread, even if it harms the species as a whole (e.g. by reducing its population).

But what do I care? I’m interested in how the human mind works, and how to build one!

Well, though Darwin just wanted to explain the creation of complex organisms, he accidentally discovered something far more general: a theory that explains the creation of all knowledge, not just genetic knowledge (e.g. how to build reliable anuses). This was made clear by the philosopher Karl Popper, who developed an evolutionary theory of knowledge. He argued knowledge is created by conjecture and criticism - variation and selection.

Evolution, then, isn’t really about biology. Like the mind, it’s about knowledge-creation.

This is why I’m interested in evolution in all its forms, and why I decided to read The Selfish Gene. By learning about biological evolution, I hoped to learn something about how human ideas evolve. I think I succeeded, and I learned lots more besides. In fact, here’s a Twitter thread full of interesting excerpts from the book, and thoughts I had while reading it. (Btw, make sure to read the out-of-context quotes like, “...sawing off heads is a bit of a chore.”)

6: A Few Questions on AGI

Popperian algorithms, evolvability, explanations, and more.

The fun of research is as much in the questions as the answers, so I figured I’d share some of my latest ones!

  1. What are the limits of biological evolution?

  2. What makes explanatory knowledge special?

  3. How can Popperian epistemology improve narrow AI algorithms?

  4. What are the different kinds of conflicts between ideas?

  5. Why is the brain a network of neurons?

  6. How does the low-level behavior of neurons give rise to high-level information processing?


1. What are the limits of biological evolution?

Could evolution ever produce a wheel-and-axle? Perhaps such inventions are “all or nothing”, and are therefore impossible to create in a sequence of tiny, trivial genetic mutations. It’s a counterintuitive fact that evolution can produce stupendous complexity (if you doubt this, then try 3-D printing a hippo…), but it cannot produce even a simple thing if it is too different from what already exists. Evolution is a bit like a mountain climber that can go as high as the sky, but never jump a gap. (By the way, I suspect today’s neural networks have similar limitations.)
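
To make the climber metaphor concrete, here’s a toy Python sketch (the fitness landscape is invented; nothing here is meant as real biology). A climber that only takes single steps uphill will reach the top of the first hill, but can never cross a gap of zero fitness to reach the higher peak beyond it:

```python
# Toy illustration of a climber that can't jump a gap:
# one-step-at-a-time hill climbing on an invented 1-D fitness landscape.

fitness = [1, 2, 3, 4, 5, 0, 0, 0, 9, 10, 11, 12]  # a gap of zeros, then a higher peak

def hill_climb(start: int) -> int:
    """Repeatedly step to a strictly fitter neighbor; stop when stuck."""
    pos = start
    while True:
        neighbors = [p for p in (pos - 1, pos + 1) if 0 <= p < len(fitness)]
        best = max(neighbors, key=lambda p: fitness[p])
        if fitness[best] <= fitness[pos]:
            return pos                # no neighbor is fitter: stuck
        pos = best

print(hill_climb(0))  # -> 4: the top of the first hill; the peak at index 11 is unreachable
```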

This question about limits is interesting because there is something special about humans - something we can do that evolution can’t. But what? While we know something of the answer, our knowledge is vague. Creating artificial general intelligence may depend on a more precise understanding of the boundary between the capabilities of human minds and biological evolution.

One could call this the question of evolvability - what can be evolved and what can’t be? Incidentally, this question - about what is possible and what is impossible - has the same form as statements in constructor theory, which is about what physical transformations are possible and impossible, and why. Perhaps the limits of evolution can be expressed in constructor theory, or illuminated by ideas from the constructor theories of information, life, and thermodynamics.

2. What makes explanatory knowledge special?

In his 2012 article, Creative Blocks, David Deutsch argues that “the ability to create new explanations is the unique, morally and intellectually significant functionality of people (humans and AGIs)…” He elaborates on this in The Beginning of Infinity and his TED talk on explanation, but we have a long way to go before our understanding is sufficient to create a computer program capable of explaining anything.

In a recent post, I asked:

Explanations are about what is objectively true rather than only what is useful. What are the consequences of this? What makes this sort of knowledge more powerful, and indeed more useful, than other kinds? What is different about its structure? What mechanisms of variation and selection are required to create this sort of knowledge? What is so difficult about creating it? (After all, it’s an extremely recent innovation in the history of life on earth.)

In my last post, True vs. Useful, I tried to explore one feature of explanations that makes them especially powerful:

In the end, the search for truth entails the pursuit of logical consistency among all our ideas, and thus takes advantage of all our knowledge - not just a single, fixed idea. It subjects our ideas to a powerful form of selection - logical contradiction - not found in biological or machine learning systems. Most importantly, it provides a combinatorial explosion of opportunities for conflict - and thus for progress.

On a different note, what makes explanations so hard to create? One idea, as Deutsch argues, is that good explanations are “hard to vary,” meaning that most modifications not only make them worse, but completely non-functional - unable to explain anything at all. This makes them hard to reach, much like “all or nothing” inventions such as the wheel-and-axle.

To visualize this property, imagine a vast cube representing the space of all ideas, where the best - the most true and useful - ideas are bright points of light while the worst ideas are invisible. In this space, good explanations are solitary, bright points. They are rare, sparse, and disconnected from other bright regions. On the other hand, useful rules of thumb exist as fuzzier clouds of points, for similar rules of thumb are about as good as one another, and so the nearest points to a given rule of thumb are about equally bright. Similarly, genetic knowledge forms a fuzzy tree without any gaps - for any two points in the tree, you can get from one to the other by following a path of bright points.

Now imagine you’re trying to navigate this space to improve your ideas, but can only see a short distance. If you find yourself in a fuzzy cloud or tree, you can look around at nearby points to see if any is brighter, and move to it. By contrast, finding a good explanation is far more difficult, for it is hidden in a vast space, and you might pass within a short distance of it without ever seeing it. Stumbling and looking around isn’t enough. You need something more like high-powered telescopes and ultra-accurate, long-distance teleportation in the space of ideas. How do we do that?

3. How can Popperian epistemology improve narrow AI algorithms?

In his 2012 article, Creative Blocks, David Deutsch argues Popper’s work on epistemology is key to building artificial general intelligence. I think it may also inspire unique advances in narrow artificial intelligence algorithms (which, despite their lack of generality, can still be tremendously useful). After all, Popper’s work applies to the creation of knowledge in all its forms, from biological evolution to human minds - and narrow AI.

Also, the whole point of AGI is to write a program that is a mind, so the earlier one can apply and test one’s theoretical ideas (by programming them), the better. Such tests are bound to uncover all manner of subtle theoretical issues that would otherwise go unnoticed. That’s how programming usually goes - getting one’s ideas to work in practice is harder than anticipated, and leads to a far better understanding of things.

One idea is to apply the conclusions of True vs. Useful, and focus on solving constraints rather than maximizing performance. For a visual metaphor, it’s like trying to get puzzle pieces to fit together rather than walking to the top of a hill. While this approach has a long history (e.g. logic programming in Prolog), logic-based approaches to artificial intelligence have been mostly unsuccessful. They are brittle and full of precise statements, while human knowledge is flexible and full of fuzzy statements. So, there are unsolved problems here, and perhaps deep learning and Popperian ideas can help address them.
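
As a minimal illustration of the difference (a toy of my own, not a proposal): here’s a tiny map-coloring problem solved by backtracking search. There’s no score to maximize - either all the constraints are satisfied, like puzzle pieces fitting together, or the solver backtracks:

```python
# Solving by satisfying constraints ("do the pieces fit?") rather than
# by maximizing a score ("how high is this hill?"). The map is invented.

NEIGHBORS = {              # which regions touch which
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["B", "C"],
}
COLORS = ["red", "green", "blue"]

def solve(assignment):
    """Backtracking search: extend the assignment until every region fits."""
    unassigned = [r for r in NEIGHBORS if r not in assignment]
    if not unassigned:
        return assignment                       # every piece fits
    region = unassigned[0]
    for color in COLORS:
        # Constraint: neighboring regions must not share a color.
        if all(assignment.get(n) != color for n in NEIGHBORS[region]):
            result = solve({**assignment, region: color})
            if result is not None:
                return result
    return None                                 # dead end: backtrack

print(solve({}))  # -> {'A': 'red', 'B': 'green', 'C': 'blue', 'D': 'red'}
```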

For one thing, I think a common mistake in the history (and present) of AGI research is to take a particular cognitive tool like logic, language, or analogy and suppose it is the core of intelligence. As Popper explained, variation and selection are at the core, and other things just provide specific (and often tremendously useful) mechanisms of variation and selection. Perhaps taking this seriously will help solve the problem of how to use logic in artificial intelligence - both narrow and general.

4. What are the different kinds of conflicts between ideas?

A key part of understanding minds is understanding how ideas interact within them. How do these interactions lead to the variation and selection required for the evolution of knowledge? How are ideas combined and altered to form new ones? How do they exert selection pressure on each other? What kinds of interactions spark the search for new ideas?

As I argue in True vs. Useful, logical contradiction offers one example of how ideas can interact, but a subtler example might be when you are surprised by something. In this case, it’s unlikely there’s any explicit logical contradiction at work, but there is still a conflict of ideas. If you are surprised upon entering an elevator containing three goats and a glowing block of uranium, you are experiencing a conflict between what you expected to see and what you are actually seeing.

Something similar must be happening when you find something interesting. Here, the conflict is even more subtle, though, and I don’t quite understand it. Presumably an idea appears interesting if it seems both novel and relevant to problems that one cares about. Given the vagueness of such a statement, it can no doubt be improved upon by trying to program it, as I mentioned earlier.

At any rate, the interactions between ideas are fundamentally important, conflict is one key example, and it exists in many different forms which have evolved for different purposes, like finding things dangerous, desirable, interesting, and surprising.

5. Why is the brain a network of neurons?

There are many ways to build a computer, and the only fundamental requirement is that it be Turing-complete. While brains and modern processors both satisfy this requirement, one does it with a network of neurons and the other with a von Neumann architecture.

Why the difference? Presumably because brains had to be evolvable while modern computers could be designed (see question #1 above). A network of neurons can start small and grow larger in the course of evolution and be useful at each stage. In contrast, modern computers are like the wheel-and-axle. They’re all-or-nothing. If one part of the system breaks, or has yet to be created, then it is useless.

Setting aside the question of evolvability, though, should minds be made of networks of neurons (or simulations of them)? While the way a computer is built doesn’t affect what it can do in principle, it does affect what it can do in practice. After all, the integrated circuits in modern computers are millions of times faster than their vacuum tube predecessors. Moreover, there are different algorithms for doing the same thing, and they can be wildly different in their speed and memory usage. Perhaps the network structure of the brain indicates that minds depend for their efficiency on concurrent, distributed, networked computation.

For example, consider how efficiently the brain can search its memory in the course of a conversation. I’ve sometimes wondered how it was that I recalled a perfectly apt anecdote despite not having thought of it for years. Evidently, it was stored in such a way that, under the right circumstances, it could quickly and easily be found and shared. That is not a trivial feat, given the vast collection of memories in a mind. For instance, it would be far too slow to go through all one’s memories one by one. By the time you’d found a good story, the conversation would be over! The network structure of the brain seems to handle the problem with ease, though. The general picture (as in deep learning) is that a situation “activates” some neurons, which in turn activate the neurons they’re connected to, and this cascade of activity can eventually reach a region of the brain associated with some long-dormant anecdote that’s perfectly relevant to the conversation one is having.
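
Here’s a toy sketch of that cascade - spreading activation over a tiny memory graph. The graph, link weights, decay factor, and cues are all invented for illustration:

```python
# Toy spreading activation: cues activate a few nodes, activation flows
# along weighted links for a couple of steps, and the most activated
# node is "recalled". Graph, weights, decay, and cues are all invented.

GRAPH = {  # node -> [(neighbor, link_weight)]
    "elevator":        [("goat story", 0.9), ("building", 0.4)],
    "goats":           [("goat story", 0.8), ("farm", 0.5)],
    "goat story":      [("that one summer", 0.7)],
    "building":        [],
    "farm":            [],
    "that one summer": [],
}

def spread(cues, steps=2, decay=0.5):
    """Propagate activation outward from the cue nodes for a few steps."""
    activation = dict(cues)
    for _ in range(steps):
        new = dict(activation)
        for node, level in activation.items():
            for neighbor, weight in GRAPH[node]:
                new[neighbor] = new.get(neighbor, 0.0) + level * weight * decay
        activation = new
    return activation

result = spread({"elevator": 1.0, "goats": 1.0})
print(max(result, key=result.get))  # -> 'goat story': the memory both cues point at
```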

So, if that’s one example of the efficiency and practical value of network-based computation, what are others?

6. How does the low-level behavior of neurons give rise to high-level information processing?

Human brains must be Turing-complete (after all, they came up with the idea of Turing-completeness!) but how does one build a universal Turing machine from neurons and their connections? This is an active area of research (here’s one potential explanation).
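
One standard stepping stone toward an answer (a textbook observation, not the explanation linked above): a single threshold neuron can compute NAND, and NAND gates suffice to build any boolean circuit - so, given enough neurons and some way to store state, networks of them can in principle perform any digital computation. A minimal sketch:

```python
# A McCulloch-Pitts-style threshold neuron computing NAND.
# NAND is universal for boolean logic, so circuits of such neurons
# can in principle implement any digital computation.

def neuron(inputs, weights, threshold):
    """Fire (1) iff the weighted sum of the inputs reaches the threshold."""
    return int(sum(i * w for i, w in zip(inputs, weights)) >= threshold)

def nand(a, b):
    # Weights of -1 each with a threshold of -1.5: fires unless both inputs are 1.
    return neuron([a, b], [-1, -1], -1.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, nand(a, b))        # 0 0 1 / 0 1 1 / 1 0 1 / 1 1 0

# XOR built purely out of NAND neurons, the usual four-gate construction:
def xor(a, b):
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

print(xor(0, 1), xor(1, 1))            # -> 1 0
```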

More generally, for any given computation, how can it be expressed in terms of the behavior of neurons? The same question exists for modern computers, too, but instead of expressing things in terms of neurons, one uses the low-level instructions which a computer processor offers. Historically, engineers hand-coded these low-level instructions, then developed a slightly higher-level language to make things easier. They could write code in this language, and it would be translated, or compiled, into the relevant low-level instructions. Later, other languages were built on top of that language. This process has continued, and now modern programmers can express high-level ideas easily and then compile them into the instructions which a processor can understand and execute. Perhaps a similar process happened historically with brains, and can be used in any network-based AGI computer we wish to build.
