This is another entry in what I’ve come to think of as “Craig Mod books”: reflections on walking and what that activity does to thought through the body. I first read Rebecca Solnit’s Wanderlust on his recommendation in that essay and loved it. It treated walking as something with a history, with many purposes across time and cultures, and treated those purposes with respect and a genuine criticality, all while acknowledging the impossibility of covering a concept as broad as “walking” in a book only a couple hundred pages long.
To pass the social system off as an objective artifact determined by (quasi-) scientific processes, forecasting has to scapegoat “irresponsible” individuals for failing to live up to the terms of the forecast. Adorno writes that “the constant appeal of the column to find fault with oneself rather than with given conditions” is evidence of “the implicit but ubiquitous rule that one has to adjust oneself continuously to commands of the stars at a given time.” When forecasts end up being inaccurate, the fault lies not in the prediction methodology but the individual’s failure to adjust to the forecaster’s advice.
This piece by Robin James calls out big data through the lens of Adorno’s critique of astrology: “Big data doesn’t forecast the future but remakes the present in the image of down-to-earth stereotypes.” It gives voice to a lot of feelings I’ve had that push against not just the privacy angle but the casting of narratives about people. It reminded me of how, at one point in the history of psychology, there was a debate about whether academic psychology was concerned with individuals or populations. That debate was settled long ago: the focus is on groups, because that’s more descriptive, and insofar as that ship has sailed, the field is nearly myopic in its concern with groups. Which is to say: moving from generalizations about a group to an individual is not a deductive inference. It is not even necessarily a transductive inference, but an inductive one. The movements of big data, modeling, and forecasting, though, have started swinging the pendulum in the other direction. Anyway. It’s a very good article.
Also, I clapped out loud when I read this, about the self-tracking movement and how it becomes a kind of imperative to fix oneself rather than a way of highlighting problems to be addressed collectively:
Adorno explains how this can seem empowering but really isn’t: “The idea that the stars, if only one reads them correctly, offer some advice mitigates the very same fear of the inexorability of social processes the stargazer himself creates.” It reinforces the neoliberal myth of individual responsibility for social problems and misdirects our attention toward dumbed-down superficial solutions to complex social problems. For example, framing problems of political economy, class, and race as an “obesity epidemic” assumes both that obesity is a problem and that it is a problem that can be solved by modifying individual behavior (diet, exercise).
This gets at something I rarely hear talked about and never hear from people focused on the technological side of things.
I dropped my father’s cup today.
My dad died when I was eight. I would get up before him, make a breakfast of bagels or toast and chocolate milk, and watch cartoons. He would come downstairs and go for a run, come back, shower and make coffee. He’d drink it out of one of a few mugs, all of a similar size, and most of which I still have. He had good taste in art, and his mugs were handmade and hand-painted by local potters. They’re irreplaceable.
There’s an old story about a Zen monk who had a favorite cup. It was beautiful and prized, and he enjoyed showing it to temple visitors. He used it for everything, all the while saying out loud, “This cup is broken.” When at last he dropped the cup, he wasn’t surprised. He repeated, “This cup is broken.”
Today my grip faltered when I grabbed one of my father’s mugs. It fell, hit the rim of the sink, rolled, and fell to the ceramic tile floor. I knew what was going to happen. I said, in the quick voice that makes no sound and forms no words, “This cup is broken.”
It bounced, rolled, and was stopped by its handle. If anything it scratched the tile. I picked it up, and as I placed it in the cupboard, said, “This cup is broken.”
The world is full of nonsense, but here’s a slice of sanity, including something you can buy. I’m writing this note to myself because I used this process to finish proofreading my dissertation, a task I find nigh impossible.
Write down all of the things you have to do that aren’t written down in a systematic way. Write them on a piece of paper. Or write them on index cards. Write until your brain feels empty. You can write down feelings or theories too.
Now do something with everything you wrote down. Evaluate it. If you can do it in less than two minutes, do it when you read it. Otherwise put it somewhere. Doesn’t matter where. Stacking up the index cards is just fine. Throw away or cross out the things that aren’t important. Save your feelings and theories somewhere. It’s important to reflect on those, if only as a map to see how far you’ve come through time.
Now you have a list of the things you have to do that are hard. Pick one. Now break it up, either on paper or in your head, into a list of actions in the world you need to take to complete it.
Download Forest. Set the timer for 25 minutes. Do what it says.
Don’t work more than three hours unless you feel possessed by flow and momentum.
From time to time, do the whole thing again. That’s it.
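The triage step above can be sketched in a few lines of code. This is a toy illustration, not a real tool: the task names, time estimates, and the exact 120-second threshold are all invented for the example.

```python
# A minimal sketch of the triage step: the two-minute rule plus a holding
# pile for everything else. All values below are illustrative.

TWO_MINUTES = 120  # seconds

def triage(tasks):
    """Split a brain-dump into things to do now vs. things to stack up.

    `tasks` is a list of (description, estimated_seconds) pairs.
    Returns (do_now, pile).
    """
    do_now, pile = [], []
    for description, estimate in tasks:
        if estimate <= TWO_MINUTES:
            do_now.append(description)   # do it the moment you read it
        else:
            pile.append(description)     # stack the index card for later
    return do_now, pile

# Example brain-dump (invented items):
dump = [
    ("reply to advisor's email", 90),
    ("proofread dissertation chapter 3", 5400),
    ("water the plants", 60),
]
now, later = triage(dump)
print(now)    # the quick items, handled immediately
print(later)  # the hard list, to be broken into concrete actions
```

Each item on the `later` pile is what the next step operates on: pick one and decompose it into actions in the world.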
Which task manager or notes app you use matters so much less than the process by which you do things. Process matters. Your notes app and task manager matter only to the extent that they mesh with your process. (It is worth spending some time finding the right fit, but don’t confuse this quest with any kind of productivity. I find it relaxing and intellectually stimulating to evaluate task managers; I’m searching for the right amount of complexity, visual presentation, and ergonomics. I’m far from finding one.)
Also, it’s nice to take time and write to yourself.
In a quiet moment in Seattle, Robert Levine, a social psychologist from California, quoted the environmentalist Edward Abbey: “Growth for the sake of growth is the ideology of the cancer cell.”
I’ve written over 250,000 words since the middle of 2016 when I started journaling every morning. That doesn’t include the six months I spent writing 150 A5 pages in a physical journal, which I gave up because I wasn’t writing as much, nor accessing the kinds of truths and insights that come from writing near the speed of thought, bucking self-censorship or rewriting in my head, and charging forward to the next thought or feeling and giving it shape and constraint in language. I like writing with pens because it slows me down and forces me to focus on expressing myself well. But that’s not quite what’s needed in the morning when thoughts are still nebulous and unaligned with the day’s tasks or past.
What will I do with all these words I’ve made? I occasionally think of drawing from them to create articles or more formalized writings, but really they exist as a record of my reasonings with myself. It takes a lot of reflection to know whether something is happening because it should be happening, because you really want it to happen, or because the wagon is riding through deep ruts and the weight of habit is a blanket of unexamined comfort we hesitate to shed. Writing in the morning doesn’t always sort this out, nor does it often unveil such clarities, but when it does it is like meditation, or a funeral for a bad decision, or laying rich soil over a withered garden.
In this exercise, I write for myself, save for this that you are reading now. This feels somehow greedy, like a dragon hoarding its treasure, or a beetle scooting its ball of dung. Which is why I’m working to write more, to share more, to leave my shared self more open. When I was a child I had a terrible problem with believing everyone had the same education and knew the same things I did. This meant that when someone asked a question, especially a technical one, I thought of them as idiots, forgetting how I’d learned everything I knew: through patient questions and answers from other kids, adults, and books, and eventually the internet. My incredulity was cruel, because so much of our society is predicated on shame, and I could with a glance bring shame to an adult. (How do you exist without knowing how to use computers? How can you have a job without knowing how to edit the registry?) I soon came to realize that instead of invoking shame I could evoke money, and had to learn how to treat people decently and with patience while lubricated by money. My eagerness to help people spread, even without the money, when I saw over and over how the delight of empowerment radiated from someone who learned how to do something to help themselves, for good. This is an important facet of education, and I’m somewhat suspicious of people involved in it without this kind of story of revelation. But perhaps it is a fiction I tell myself, and it is the lesser of my two stories of educational revelation, the second being the jarring juxtaposition of freedoms and learning between my Montessori elementary school and my public middle school.
Anyway. It feels good to write, and the thing that holds me back from sharing my writing is half craft, and half shame, and the shame part, for lack of craft, is representative of the part of society that needs to wither, like the garden, replaced by honest words.
One of the things about that book, Authority, is that it makes the case, to me at least, that people are basically lying, or if not lying, misrepresenting themselves as experts, as people worthy of attention. But I’m coming to realize that that’s okay; it’s nearly impossible to fight against anyhow. Not everyone who shares something has to be an expert in their field. Not everyone who writes a tutorial has to do it as a retrospective; it’s much easier to write a tutorial while working on the very thing the tutorial is about, as a kind of more formalized note-taking shared with a bit of narrative framing. That’s not bad.
But coming from academia, and the particular aggressive and ultimately bad argumentative style that was inculcated in me at Indiana University’s cognitive science program, I have the instinct to say with chest forward, “Who are you to make such proclamations?” But who am I to make such proclamations? If someone wants to talk about learning, I can say, “What you are saying is not backed up by the literature.” I can even appeal to authority a little and just say, “No, that’s wrong. It’s a lot of effort for me to go through and get citations for you, but I’m saying as someone who has been in graduate school in learning science for eight years that you are wrong.”
I worry about the very authority of the source, now, after reading that book. I worry the blog post I’m reading was written by someone who doesn’t understand the best way of teaching what they’re teaching, who doesn’t have a sense of the bigger picture, who isn’t intentionally introducing concepts here and holding them back there. I worry about this doubly when I read that the blog post’s author has a book on the subject.
But that’s okay. We’re all grasping at shadows in a dark room.
Simone Weil’s On the Abolition of All Political Parties talks about how people are pulled either into the light of truth through a sense of unbiased reason, or away into the darkness through the bouts and vicissitudes of passionate desire. But each person’s pull away from the light is in a different direction, while each person’s pull toward it is in the same direction. This is a basic assumption of the goodness of democracy. (If democracy is not good, it is not what we should be practicing. If Hitler had never risen to power but the Weimar Republic had still committed the atrocities of World War II through a democratic process, that would not make those atrocities somehow less atrocious.) Political parties, on the other hand, serve to focus and align the chaotic desires of darkness and bundle them together into a force with power. No one blinks, these days, when someone says “as a Democrat” or “as a Republican,” but these statements are parroted nonsense. It is not interesting or useful to know what a representative’s party stands for when we purportedly elected them for their ability to represent our views, or because their views are a close enough reflection of our own. For this and other reasons all political parties should be abolished.
We should replace them with a kind of informal forum, or, as she calls them, a series of journals, which one might read or contribute to but never belong to; one would orbit them as a reader or a writer, never as a member, a subordinate, or a parrot. In that way each person, each representative, evaluates for themselves the plans and policies and proposals of the others, asking questions and offering criticism, and each casts their vote, when the time comes, in the light of their own reason and the will of those whom they represent.
I basically agree with everything she says. The trouble is that it makes action difficult. One of the ways in which she criticizes political parties is how they develop a binary stance towards or against something, how they drain from the issue all nuance and use propaganda and slander to vie for power. The trouble is that our votes work this way, most of the time. One piece of legislation, arguments for or against. She doesn’t talk about it much, but in an ideal congress there would be a deliberation, proposals sent out and digested, discussion had, amendments made, and the best and most just proposal would succeed. That takes a long time, though. Maybe it should. I don’t know. But I agree with her. And her words are stark and piercing in the firelight of the current political climate.
I started reading Authority by Nathan Barry. It’s one of those cheap, motivational e-books. But it does get at a point I’ve sucked at: I should teach what I know, and teach what I learn. I’m starting to feel selfish keeping knowledge to myself, as though I’m hoarding it. When I learn something and don’t share it, or specifically when I don’t reinterpret it through my own lens, that’s when I’m most vulnerable to the kind of guilt the educated and privileged feel when they encounter someone who doesn’t know that what they do is even something one could do, let alone the specifics of what it is that you do.
I explained skeuomorphism to someone yesterday and got paid to do it. It was fun and easy. I explained it in the context of redesigning a very simple and flat user interface for an educational iPad app. They wanted better icons for students who were just starting to read. That makes sense: students might not be able to read what the “Erase” button says. But I had to caution them against the instinct to make it “look like an eraser.” Which eraser, from where, and how do we make it look like an eraser while keeping the rest of the theme consistent? I explained the difference between an iconic representation of something and a realistic one, because they were going to commission icons from an artist, and I wanted to make sure my app design, which tries so hard to be faithful to the physical thing that inspired it, wasn’t compromised.
I could very easily turn around and write a blog post about that. Just to share that little bit of story. Or go deeper, explaining that iconic things ultimately try to evoke affordances, to show what a thing is for, while realistic things focus more on trying to convey the metaphor. Imagine we want an icon to replace a button that says “Erase.” Our choices are between a realistic depiction of a pink gum eraser or a trapezoid with a line underneath it to one side. The former is obvious; we’ve seen erasers before and used them. Their affordances are pretty transparent. As long as I know what to do with an eraser like that, it’s easy to infer what the button does. The trapezoid with a line underneath it, meant to convey the standard gum eraser and its action, mainly dragging across a page to erase, relies on one large hurdle: the user has to infer what the trapezoid is.
If the design is well done, proportions just so, then maybe it’ll succeed. But children are not the best at focusing in on a single interpretation of an iconic design; this is why applications for children tend to have a lot of detail, shading, color, sound, and responsive interactions. I don’t think any of this is necessary. (I see a lot of it as trickery to entice children to stay in your application, a kind of frosting and sprinkles, or, as is parroted without much critical thought, “chocolate-covered broccoli.” Make an apple or a carrot instead. Still sweet, fun to eat with nothing but your hands, more sustainable, and better for those who want to eat more.) Succeeding with a minimal design in technology for children is a mark of good design.
Or not. When I was a kid, in our (Montessori) school we had about fifteen computers scattered about our classroom, for a class of 30 kids between 4th and 6th grade. Many of the tasks we were assigned every quarter involved progressing through a series of problems about grammar or arithmetic administered through computer programs written by the head teacher (in QBasic, the language we were all taught to program in). The computers ran DOS on old monitors; they were donations, mostly from professors and business people who upgraded their homes or offices or labs. The school was a non-profit, so their donations could be written off on their taxes. The programs were simple things, white text on a black field with color here and there, used to represent and connect to other parts of the curriculum. (I still think about these colors and shapes when I think of the parts of speech; the point of the task, which starts in 2nd grade with big metal manipulatives, is to develop a kind of synesthesia when thinking about a sentence and its constituent parts.) They were simple. But they were engaging enough that we played them on our own time, which at a Montessori school is most of the time. We never dreaded them, and only complained about them in the same way that workers in an office complain about any work they’re expected to do. They were enough. And they’d hold up now, though a four-year-old with an iPad is afforded far more complicated, nuanced, and potentially better learning design and spaces than what we were given with a five-pound steel keyboard in front of a CGA screen atop an 8086.
Anyway. I wrote about something I know about. It felt good. You should do it too.
The hamburger menu seems ubiquitous. I think it’s symptomatic of a kind of thinking that we need to excise from design. Designers use it to disempower users, and confusing or frustrating people is not the same thing as increasing engagement. Intentionally conflating those two things in a client’s mind in order to misrepresent them is violent.
Developers and companies typically want to increase engagement with whatever they make. It means people keep their application, their brand, or the ideas the designing organization wants to perpetuate at the front of their minds, paying them in attention. This arose both from a desire for influence (the more someone uses a well-designed product with good interactions, the more they’ll use it in the future, leading to more money or engagement or user data) and a desire to present ads to a somewhat captive audience. One of the ways this manifests for users is in an effort to isolate them in their task: they’d like to navigate away or leave their screen or task but find themselves without the means to do so. (Apple notoriously solved this problem with a physical and ever-present home button, so users always had a way out at their literal fingertips. That has since been relegated to a non-intuitive swipe interaction in the iPhone X.)
The purpose of the hamburger menu is to isolate. Ostensibly this nearly ubiquitous icon came about in an effort to hide user interface elements on smaller screens with less usable visual real estate. It has since become a way, even with all the screen area of a 27” iMac, to lower the probability that someone changes screens or moves away from what they are currently doing, what the designer wants them to do. This is usually described as a way to “reduce clutter” or “simplify the design.” It is chickenshit minimalism.
I want to drive a wedge between making it more difficult for a user to change screens and making content genuinely more engaging. Designers use the hamburger menu to remove navigational elements from an interface, which drives up metrics like time on task, and management celebrates. Yes, smaller screens and finger input limit the number of functional touch targets, which lowers the maximum interaction density of a screen. But that is not the same thing as making content more engaging. It’s just making navigation more difficult. It makes it harder for the user to leave.
Constraining navigation is not the same thing as increasing engagement, except insofar as the engagement is frustration, and frustration is what gets increased. Many designers do not want to admit that many of their design patterns are hostile to their users, or serve to limit users in service of the commissioning company’s values.
Design is a series of tradeoffs between creating and constraining affordances. The hamburger menu is a poor and unjustified constraint; using it to isolate a user is a kind of violence against that user, and failing to draw this distinction for a client or manager is hostile and violent too. The alternative is to present the user with the first level of navigational elements, the first level of a nested menu, the main verbs or nouns of the application. Touch targets don’t have to be big, and they don’t have to be foregrounded. But don’t confuse locking a user in a room with increasing that user’s engagement, and don’t infer that an increase in engagement indicates pleasure, happiness, or usefulness. (This is the same reason most learning analytics are bad.)