Nobody’s Going to Solve the Universe

First things first: corrections and clarifications.

It was brought to my attention that some of the language in my recent attempt at being slightly less lazy and cowardly may come across as fatalistic, or as over-emphasising the importance of ‘talent’. That was not the intention, although it is certainly not an unreasonable interpretation of lines such as:

“His brain is so much better suited to the task that the gulf is effectively impassable.”

And:

“…de facto hard ceilings do exist on an individual’s ability to draw ‘well’ in a given context.”

The overall sense that I wanted to convey with that post (and that series) was some combination of the following:

A: It’s fine not to be good at things.

B: Not being good at things doesn’t mean that you can’t have fun with those things.

C: Not being good at things doesn’t mean that you can’t become good at those things.

D: Things at which you are not good are an excellent sort of thing to do if you want to improve in general.

And perhaps I did a bad job at that.

The reason talent, as a concept, came up is really just because I found it interesting and got side-tracked. I do think it’s worth acknowledging, however – not as a counterpoint to the narrative of self-improvability and embracing failure, but as a little bit of extra seasoning. Well, that’s a topic for a different day. I just thought it was important to address any potential miscommunication. Writing is nothing if not an attempt to communicate, after all. Simply being ignored may be a poor way to fulfil that function (although it is kind of an inevitability), but giving the wrong impression is far worse.

That’s part of what’s attractive about fiction, as a medium. Your ideas are couched in a much more overt layer of abstraction, and the reader has an automatic expectation that your ‘intent’ may have a relatively fuzzy correspondence to much of the verbatim content, or may be plural, or only very loosely defined. A more straightforward approach is much more liable to backfire. Today’s ‘idea’, such as it is, is something that I would absolutely have expressed via story in the past – so we’ll try nonfiction and see how it goes.

On another note, this isn’t a report on my tepid adventures in meditation. That is coming (oh, the excitement), but I had a busy weekend, in an intensely slothful sense of the word, and want to try a couple more things first. If I choose something more ‘one and done’ next week then things should end up back on track.

So, today I thought I’d just briefly ramble on a shower thought I had.

It was prompted by the memory of an overheard conversation – I was sitting in a GBK (for some reason), next to a pair of young men who it would be safe to assume were students. Young Man A mentioned something about “Zeno’s paradox”, which isn’t particularly descriptive. Young Man B requested a description, and was supplied with a sort of hybrid of this:

“In the paradox of Achilles and the tortoise, Achilles is in a footrace with the tortoise. Achilles allows the tortoise a head start of 100 meters, for example. If we suppose that each racer starts running at some constant speed (one very fast and one very slow), then after some finite time, Achilles will have run 100 meters, bringing him to the tortoise’s starting point. During this time, the tortoise has run a much shorter distance, say, 10 meters. It will then take Achilles some further time to run that distance, by which time the tortoise will have advanced farther; and then more time still to reach this third point, while the tortoise moves ahead. Thus, whenever Achilles arrives somewhere the tortoise has been, he still has some distance to go before he can even reach the tortoise.”

And this:

“Suppose Homer wishes to walk to the end of a path. Before he can get there, he must get halfway there. Before he can get halfway there, he must get a quarter of the way there. Before traveling a quarter, he must travel one-eighth; before an eighth, one-sixteenth; and so on.”

(Both descriptions lifted straight from Wikipedia – if you’d like something a little more academic, try here.)

Young Man B expressed his incredulity that somebody as presumably intelligent as Zeno could have said something so obviously stupid, then uttered the words that would, some weeks later, trigger the shower thought: “Surely that can be solved by…”

I’ll admit that I didn’t hear what followed (probably I was busy eating a burger). It may have been ingenious. It may have been a revelatory new insight into motion, space and/or tortoises. Even if that were the case, however, it wouldn’t matter much to the topic at hand.
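For what it’s worth, the reply he was presumably reaching for is the standard one from any calculus course: the infinitely many catch-up segments sum to a finite distance, and (at constant speeds) a finite time. A minimal sketch, assuming the numbers from the Wikipedia passage above:

```latex
% Total distance Achilles covers before drawing level, using the numbers
% from the quoted passage (100 m head start, Achilles ten times faster).
% Each catch-up segment is a tenth of the last: a convergent geometric series.
\[
d = 100 + 10 + 1 + \tfrac{1}{10} + \dots
  = \sum_{n=0}^{\infty} 100 \left(\tfrac{1}{10}\right)^{n}
  = \frac{100}{1 - \tfrac{1}{10}}
  = \frac{1000}{9} \approx 111.1 \text{ m}
\]
```

Whether that genuinely dissolves the paradox or merely does the arithmetic politely around it is, conveniently, rather the point of everything below.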

What I found striking was the automatic assumption that a paradox is something you’re supposed to solve – a problem for which, given time and thought, a satisfactory answer will emerge. That seems like some combination of optimism, stubbornness, and misinterpretation – but it’s an attitude that we see represented in STEM-ish conversations (and elsewhere) all the time. There’s a sense that if something hasn’t been comprehensively solved, that’s only because we haven’t gotten there yet. There is much less often a sense that ‘getting there’ and ‘solving’ may not be as broadly applicable as we might like.

Now, as ever, this is all overly simplistic. There are a million different things that somebody might consider to be a paradox, a million different ways to engage in the practice of ‘solving’ them, and many of the combinations of those two things are going to be totally sensible.

I suppose the gist of what I want to say is this:

A: A paradox is not necessarily a ‘problem’ or ‘statement’, in the sense of being something that should be proved, disproved, or otherwise resolved, and it is not necessarily fruitful to think of one in this way.

B: It may be more fruitful to think of a paradox as being a compact expression of an interesting discontinuity or pitfall in human thought/perception, or as a way of describing a categorically insoluble problem. In other words, paradoxes are thought experiments at heart. Their most consistently valid application is in promoting discussion on a topic that either has not been or cannot be solved.

Returning to Zeno, let’s give him more credit than Young Man B did. Zeno is not saying that Achilles cannot overtake a tortoise, nor does he expect for a moment that anybody will ‘agree’ with his account of their race. That doesn’t seem like too big a presumption. What he’s actually saying may be a little fiddlier. Our friend Wikipedia states that it is “usually assumed” that Zeno’s assortment of paradoxes was intended as a defence of Parmenides. That is to say, that he was looking to promote a Parmenidean view of ‘existence-as-one’ by demonstrating the absurdity of the perceived alternative – an infinite descent into discrete fragments. Perhaps that’s the sort of polemical context that leads people into a ‘solving’ mindset rather than an exploratory one. It seems better to look at Zeno’s offerings in the following sense:

A: An obviously true premise. To walk down the entirety of a path you have to walk down half of it.

B: An obviously false conclusion. Therefore, you can’t ever walk down the path.

C: An invitation to have fun in the space generated by those two things.

Now, perhaps Zeno did believe so firmly in the impossibility of motion that he felt like he was presenting statements of pure fact. That’s unlikely, but it still wouldn’t remove the world’s ability to play around with those statements and the disconnect that they express. Likewise, Escher doesn’t expect you to start building impossible structures, “This sentence is false” is not an impossible natural language sentence, and you certainly don’t need to find ways to justify your intuition that one door is just as likely to conceal a goat as the other. Let’s not even touch Schrödinger. All of those things have been excellent at instigating and sustaining discourse in both academic and popular milieux, however. That’s their strength, whether or not there is or ever will be a ‘solution’.

Which is not to say that solutions cannot emerge – especially in fields that are well equipped to develop proprietary solutions to their proprietary paradoxes. If somebody, paraphrasing via Wikipedia, were to say this…

“Let us call a set “abnormal” if it is a member of itself, and “normal” otherwise. For example, take the set of all squares in the plane. That set is not itself a square in the plane, and therefore is not a member of the set of all squares in the plane. So it is “normal”. On the other hand, if we take the complementary set that contains all non-(squares in the plane), that set is itself not a square in the plane and so should be one of its own members as it is a non-(square in the plane). It is “abnormal”.

“Now we consider the set of all normal sets, R. Determining whether R is normal or abnormal is impossible: if R were a normal set, it would be contained in the set of normal sets (itself), and therefore be abnormal; and if R were abnormal, it would not be contained in the set of all normal sets (itself), and therefore be normal. This leads to the conclusion that R is neither normal nor abnormal: Russell’s paradox.”

(I’ll start using better sources at some undetermined future date. It’s hard. I’m not at university anymore and have become functionally illiterate.)

… then somebody might decide to develop an axiomatic set theory that resolved the issue. Which would be a good thing to do – a lot of people would be happy with that. Of course, it could be argued that the foundations of mathematics form a closed system that generates its own data, and so the role of a paradox therein is quite different to elsewhere. I wouldn’t go so far as to say that myself, though, because I know nothing about mathematics. Regardless, the luxury of defining axioms and then putting them into effect yourself isn’t going to be universally available, and that seems like a game-changing tool to have at your disposal.
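For anyone who prefers symbols, the whole affair compresses to a couple of lines, and the standard fix – Zermelo–Fraenkel set theory, roughly speaking – works by refusing to let R be constructed in the first place. A sketch, not a lecture:

```latex
% Russell's set of all "normal" sets, via unrestricted comprehension:
\[ R = \{\, x \mid x \notin x \,\} \]
% Asking whether R belongs to itself yields the contradiction:
\[ R \in R \iff R \notin R \]
% ZF's axiom schema of separation only licenses carving a subset out of
% a set A that already exists, so R can never be formed at all:
\[ \{\, x \in A \mid x \notin x \,\} \]
```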

The woolly, unsound message here is that, most of the time, a paradox isn’t your enemy. It’s not something that you have to defeat, and it’s not an attack against you or your intuitions, even if it might be being used as one. Your intellect will almost always be better spent treating it as an invitation rather than a challenge. An assumption of solvability might be comforting, but if you cling to it too tightly then there are waters that you will never really be able to navigate. Without wishing to get too obvious, good luck explaining your existence. To return to Parmenides, ex nihilo nihil fit, and so on. In other words, you’d best start believing in paradoxes… you’re in one. Nobody’s going to solve the universe – not now, not ever, unless something very peculiar happens. That doesn’t mean that there isn’t plenty of universe to discuss and plenty of exceptionally good reasons to do so.

Well, that’s at least as much as one total non-expert should really ever say on the subject. Again, I’m trying to find a discourse method that works. It’s hard to know what to do with these smaller ideas, apart from ‘just don’t express them’, which feels like a totally unsatisfactory solution. It’s possible that their inherent sloppiness is simply better contained by fiction.

In any case, hopefully everything made sense, and feel free to let me know why it didn’t.

That’s all for now.

TtBSLLaC: Some Very Ugly Drawings

Last week, I set myself the task of drawing “a convincing approximation of a human”. Here’s an approximation of how that went.

The first hurdle I came across was the complete absence of any pencils in my flat. This forced a decision between acquiring some and deciding that biro was an acceptable medium. I opted for the latter, reasoning that I would be too amateurish to actually make use of the pencil’s advantages vis-à-vis shading, that preparation is all too often little more than slightly beautified procrastination, and that giving myself the option of using an eraser would be symbolically inappropriate. To a substantial extent, this exercise is about granting myself the licence to fail – something that is extremely difficult, even though failing is the only thing I could reasonably be licensed to do in almost all contexts.

The next decision concerned what sort of drawings I’d need to produce. Unsurprisingly, I opted for the most lenient possible interpretation of my own criteria (behold the complacency of self-direction). This meant that the aim was to draw portraits, with no need for a body, and that a two-dimensional, cartoonish style would be acceptable.

Since I lacked the skill to translate even the ghost of a face from my mind’s eye to a piece of paper, I needed a visual aid – congratulations to conventional wisdom for that sound advice. A photograph of somebody’s face would be the natural choice. I chose Willem Dafoe:

Dafoe, clearly, is a paragon of human facial construction, and should be a shoo-in for the role of holotype should that task ever be undertaken for Homo sapiens. More importantly, as can be verified by watching any film in which he appears, he is interesting to look at. Conventional wisdom was also quick to point out that drawing should be fun, and a photograph of a grinning Willem Dafoe seemed like objectively the most fun choice for this exercise.

Please be forewarned that none of my pictures resemble this photo, even remotely. It was nonetheless useful as a point of reference. There are also more drawings than appear in this post, for reasons both practical and ethical.

The first thing I did was to test the waters, employing the time-honoured method of ‘just eyeballing it’. Here’s how that came out:

… that’s not very good at all. He looks more like some kind of forgotten archaic Homo species, or else a fantastical troglodyte or ogre of some description. It’s an approximation of a human, but it’s certainly not a convincing one.

The next attempt went off the rails almost immediately. I drew the eyes about sixteen miles further up the head than they should have been, meaning that the only way to fill the excess of space was to have the mouth and jaw be freakishly elongated. This, too, did not look sufficiently human for my liking. It did look excellently demonic, however, so I decided to lean into that concept rather than try again:

The early attempts demonstrated beyond even the slimmest penumbra of doubt that a raw, unguided approach was beyond my means, even with my trusty Willem Daphoto as a compass. More assistance was required. I found some lists of quick tips concerning the proportions of a face – the appropriate alignment of the eyes, nose and mouth, the approximate width of the head, that sort of thing. I cross-checked them against the photo, and they basically held up. Another resounding victory for the internet.

The rules were all perfectly simple and required no special effort to observe, which was convenient. I did, however, fail to account for the slight extension of the head caused by Willem’s smile:

Now, this is still a devastatingly bad drawing, and an equally crude application of the lessons gleaned in the ten minutes preceding its creation. Nonetheless, two things are true about it:

1: It is much better than the first attempt.

2: It is undeniably a picture of a human. That human is clearly not Willem Dafoe, and they have also clearly done a number of unsavoury things in the very recent past, but they are a human nonetheless.

Satisfied that this approximation was sufficiently convincing, I focused my attempts to improve upon it not on precision or verisimilitude, but on trivial, self-directed entertainment. I did a fair few of those, so here’s a couple.

Pseudo-Willem, Benthic Deity:

Pseudo-Willem as some kind of horrible, miasmatic nightmare. This one’s honestly pretty good:

Conclusion

Drawing accurate, visually appealing representations of real-world objects and/or mental images is a challenging feat that likely requires some combination of talent and tutelage. In most cases, these will have to be supplemented by copious amounts of self-directed practice, but I’m not sure that even the most manically devoted autodidact could achieve good results without those two things – or at least the talent. That said, the large number of very good drawings out there indicates that this talent exists in more people than realise it, and can be successfully brought out through a process of nurture that is not too galling.

Technically speaking, we can’t say outright that the diagraphically challenged are incapable of drawing as well as their more skilful counterparts. It’s just markings on paper, and given the existence of appropriate apparatus, we’re all capable of making the same markings – or at least markings of an equal quality, however that would end up being determined. In that sense, the upper limit for drawing skill would have to be universal among humans. Sadly, that sense is basically meaningless. Whilst my hand could theoretically generate the same images as that of say, Stephen Wiltshire, it’s never going to. His brain is so much better suited to the task that the gulf is effectively impassable. The majority of people would be in a monkey/typewriter situation if required to draw ‘as well as’ Wiltshire, or even somebody who was simply extremely good in a less extraordinary way. So, it’s probably the case that differences in innate (or simply current) starting ability combine with differences in plasticity, dexterity, mental image detail, etc. in such a way that de facto hard ceilings do exist on an individual’s ability to draw ‘well’ in a given context. They would certainly exist within a limited time span, which… well, we are all going to die at some point.

There are a lot of deeply fuzzy concepts involved there, but I think what I’m saying is general enough that I don’t need to worry about it.

The real conclusion, though, is that nobody is so bad that acquiring a little bit of extra help won’t allow them to produce dumb, misshapen images that are:

A: Sufficient for the purposes of self-entertainment.

B: Capable of communicating a good amount of their intended sense to an observer.

This week’s goal: Meditate.

Well, “Get drunk and watch the Super Bowl” seemed a little too easy, although it would be a first.

I’ve never made any serious attempts at meditation. This state of inaction is increasingly out of touch with the current discourse on personal welfare, which looks ever more favourably on intentional states of inaction. Meditative practices of various kinds now have a fairly solid foothold in the set of advice given to people who want to get their act together, approaching the level of exercise, diet, avoiding drugs, regular sleep schedules, and not buying Monster Hunter games. Those last two are pretty much a lost cause (no regrets).

Clearly, further investigation is required. There’s a whole wealth of literature on the topic, an equally broad range of methods that a budding meditator could try, and doubtless a few dozen people in Cambridge willing graciously to take money in exchange for guided sessions. So, really no excuse not to try.

That’s all for now.