This post went out to subscribers to my Substack newsletter on March 5. Check out all the posts, including ones from the archives not reposted here, and subscribe here.
The days of our years are threescore years and ten; and if by reason of strength they be fourscore years, yet is their strength labour and sorrow; for it is soon cut off, and we fly away. —Psalms 90:10, King James Version
Most of the coverage of Japan’s aging population focuses on the current low birth rate and its implications for the future. In January, Prime Minister Fumio Kishida told legislators that the country is “on the brink of not being able to maintain social functions” because of its falling birth rate. “In thinking of the sustainability and inclusiveness of our nation’s economy and society, we place child-rearing support as our most important policy,” he said.
But even if the government succeeds in goosing the birth rate, the effects will be felt decades from now. Japan has an immediate problem that dates back to policies adopted in 1948. People over 75 now make up 15 percent of the population, and they don’t have a lot of kids to take care of them. Japan’s postwar baby boom lasted only about two years. By contrast, the U.S. experienced high birth rates from 1946 to 1964.
In 1948, the Diet passed the Eugenic Protection Law. It made abortions legal and cheap, about $10. “Critics assert that it is easier for a woman to avoid an unwanted child in this way than to have her tonsils removed,” The New York Times reported in 1964. “One result of the practice has been the virtual elimination of illegitimate births.”
The law also promoted contraception, establishing “eugenic protection consultation offices” throughout the country. They provided marriage counseling and gave couples “guidance in adequate methods of contraception.” Local governments trained midwives and nurses to encourage family planning. Employers, unions, and nonprofits pushed the idea of smaller families and helped spread information about how to achieve them.
The Times reported:
At the Tokyo Steel Company plant at Kawasaki, near Tokyo, a pilot project was set up among families of the 20,000 workers. Housewives were given the opportunity to receive family-planning guidance as an alternative to such other instruction courses as cookery or household economics. A leader met with groups of about 10 women and explained not only birth control techniques but ways in which family life would be improved if there were fewer children.
Initially the labor union charged that the program was a plot by management to reduce the outlay for family allowances—wage supplements paid to workers according to the number of their children. But the women accepted the courses eagerly and as word spread, enrollment in family planning instruction rose to almost 100 per cent. The birth rate among company families dropped markedly, the rate of pregnancies and abortions declined and the number of those practicing contraception rose quickly by more than 25 per cent….
Since the eugenic program began, surveys have discovered that there has been not only an increasing acceptance of the idea of family limitation but a shift in the reasons given for it. In the immediate postwar years simple economic factors were dominant. Now, in a more prosperous and competitive society in which education is the key to advancement, 43 per cent of mothers say that their chief purpose is to provide better education for a smaller number of children. Twenty-nine per cent mentioned safeguarding of the mother’s health and 9 per cent the desire for a more pleasant life, while only 12 per cent mentioned financial strain.
The result of the eugenic program has been that Japan’s birth rate has been cut in half.
Unlike China’s notorious One Child policy, the program wasn’t coercive. But it made smaller families a norm in Japan a generation earlier than in the U.S. The results are a preview of what American baby boomers will face as they get old.
For the first time in human history, large numbers of people are living to advanced ages. A life span of only “three score and ten” is a tragedy for us, not an expectation. (When the mother of my oldest friend died last year at 91, I recalled overhearing a conversation she’d had with my mother after she turned 35, which seemed impossibly old at the time. “Half a lifetime,” she’d said.) Seven percent of the U.S. population is now over 85. In 1950, the number was so low it shows up as zero percent in United Nations figures. Fourteen percent of the population is over 80, compared to 1 percent in 1950.1
These numbers will only grow in the coming decades. The oldest baby boomers turn 80 in 2026, the youngest in 2044.
Thanks to the baby boom, today’s very old Americans tend to have multiple children to take care of them. Even for those living on their own or in institutions, eldercare is time-consuming, emotionally draining, and often extremely expensive. There are forms to fill out, bills to pay, errands to run, and health care to arrange—not to mention emotional needs. It’s easier if, like my own parents, the very old have a diversified portfolio of kids, preferably including at least one medical professional, to take care of the things they can’t handle.2
The alternative is what Japan is experiencing: a rapidly growing population of very old people without much family support. In some cases, the unshared burden of taking care of parents simply becomes too much, especially when the parent is a difficult character. In others, middle-aged children—including increasing numbers of men—are quitting their jobs to take care of their parents. At the extreme are “lonely deaths,” or kodokushi, when people die alone and go unnoticed for days. (In some cases, the deceased elderly person was not alone but living with a person with dementia.)
As a childless baby boomer, I’m afraid I don’t have a good solution.3
But we’ve been warned.
1 Using ChatGPT, I got a bunch of potentially useful data on the percentage of various national populations over 80 from 1920 to the present. But I can’t be sure the AI isn’t making stuff up, and by the time I verify the numbers against the cited sources, I haven’t saved any time. ChatGPT has an enormous advantage at rummaging through databases, but that isn’t any good if I can’t trust it. And it looks like it was wrong.
2 I live a continent away from my parents. My very capable brother who lives in the same town has taken on day-to-day responsibilities. Another brother, who lives across the state and visits several times a month, is an M.D. and talks with medical staff. I mostly handle finances, which can be done at a distance.
3 No, I don’t regret not having kids. You’re reading my work because I didn’t.
This post went out to subscribers to my Substack newsletter on February 16. Check out all the posts, including ones from the archives not reposted here, and subscribe here.
Earlier this week I had dinner with a small group of MIT professors from a variety of scientific disciplines. Among other topics, they shared their concerns about threats to the culture of free inquiry and the intellectual playfulness and audacity on which it depends. Whatever form the threats take—and they vary—these scientists worry that the institute is letting its concern for protecting its brand and pleasing government funders trump its dedication to scientific inquiry. In response, I recalled this talk I gave at a FIRE conference in, I believe, 2017. I’ve long thought I’d expand it into a “real article,” backed by more research, but never have. Until that day comes, I’m posting it here. (For more on FIRE, now the Foundation for Individual Rights and Expression, visit their website here.)
I am speaking this afternoon (Thursday, February 16) at Brown. Details here.
Two stories to start, one about academic ethics and intellectual safety, and the other about how strange an American university seems to a foreigner.
First story: When I was a senior in college, I took a graduate class in Elizabethan drama. When we got to the final paper, I had a big problem. The Christopher Marlowe plays that I found most interesting were already the subject of my senior thesis. I wasn’t inspired by the Shakespeare comedies that made up most of the other good stuff in the course and, while I liked Richard II, I had nothing interesting to say about it. The only play I found thought-provoking enough for a paper was The Merchant of Venice. That presented another problem: The professor had written a whole book about it. To make matters worse, I disagreed with his thesis, and even though it wasn’t exactly what I set out to write, once I’d read his book that disagreement inevitably became the subject of my paper.
I wasn’t trying to be obnoxious. I just didn’t have anything to say about the other plays.1
There are two problems with writing a paper disagreeing with your professor’s book. The first is that he has spent years, not weeks, thinking about the subject. He’s the expert and you are not. He will find every flaw in your argument and you won’t find every flaw in his. Plus he has a whole book to make his case and you have only a few pages. The second, of course, is that he could get mad and give you a bad grade just for disagreeing with him.
I worried a little about the first but not at all about the second.
Legally, the professor was free to give me whatever grade he believed appropriate. But I knew he would grade me fairly because I could count on his ethics as a teacher and scholar. I knew that his classroom was an intellectually safe place—not a comfortable place, not an undemanding place, but a place where I was free to disagree without punishment merely for dissenting. We would all put our feelings aside and make—and respond to—the best arguments we could.
“This puts me in a difficult position,” he wrote on my paper, before going on to comment on its substance. I had indeed put him in a difficult position, and he did still disagree, but he and his criticisms were reasonable and fair. He gave me an A-.
Second story: In 2005, someone at The Atlantic had the idea of sending the French intellectual Bernard-Henri Lévy around the United States in imitation of Tocqueville. One of his stops was Dallas, where I was living at the time, and I was his host for much of a day. At one point, we drove past Southern Methodist University, where my husband was teaching. BHL was puzzled by the idea that SMU would employ a Jewish atheist even for a secular subject like business strategy. “Why would the Methodists do that?” he asked.
The American university is a strange institution. First of all, it isn’t just one thing. There are nearly 5,000 institutions of higher education in this very large country. When we talk about “the American university,” we’re really discussing an ideal type: a place that combines teaching, research, personal development, career preparation, and social life.
That ideal evolved through the combination of three quite different models.
The earliest American colleges were devoted to civic and religious character development. They emphasized liberal arts, training future ministers and giving the Wall Street- or law school-bound children of the wealthy a classical education. They also included that weird American institution, college sports, to inculcate self-discipline, leadership, and teamwork.
As the country grew, this model of higher education spread beyond the upper class. Many Christian denominations founded liberal-arts colleges, including schools for women. They believed that higher education improved individual character, made for better parents, and prepared Americans for citizenship.
The second wave of colleges consisted of practical institutions, exemplified by the land-grant colleges that were funded by federal land sales in the mid-19th century. They trained farmers, engineers, and teachers. They also offered extension classes for local citizens who weren’t enrolled as students. They did research on things like crop rotation and hybrid corn. Unlike the private, character-building schools, they were largely government-supported and promised benefits to the citizenry at large, not just their student bodies.
The third model was a German import: the research university, devoted first and foremost to pushing the frontiers of knowledge and only secondarily to training students. Undergraduate education in particular was an afterthought. Johns Hopkins and the University of Chicago were early examples. While the practical colleges were rooted in the needs of their locales and the character-development schools were sectarian or clubby, research universities were cosmopolitan. They belonged to a worldwide community of scholars. The concept of academic freedom emerged from the research university.
1 In the classes I’ve taught at Chapman University, we’ve always had to develop prompts for student papers—something I don’t remember having either in high school or college. Here’s the final assignment for “Ambition and the Meanings of Success”:
Formulate a thesis inspired by one of the following topics. You may focus on a single work or draw on multiple sources to develop a broader pattern.
Your thesis must be something that could be wrong—that someone could argue against as well as for—not a factual statement. Saying, for instance, that Jiro’s success comes from constantly trying to improve his sushi is not a thesis. It is something the movie tells us is true. A good thesis will often answer the question why, or it will establish a pattern out of disparate examples.
You may rely entirely on material we’ve covered in class or, after discussion with the professors, delve into other examples. If you would like to explore another topic, you may do so with permission. In all of the following, the questions are simply examples of avenues you might explore. There are many other possibilities.
2 You might assume, as some conservatives do, that the “practical” model of higher education as job training is compatible with the consumerist impulse. But it is even more threatened by it. Keeping students happy erodes the demand to master material, leading to less course content and more generous grading. (Talk to anyone who has taught MBAs for more than a decade and you’ll get an earful.) When practical credentials are at stake, the consumerist model is especially corrosive.
This post went out to subscribers to my Substack newsletter on February 3. Check out all the posts, including ones from the archives not reposted here, and subscribe here.
This is a speech I gave at the Atlas Network’s annual conference in 2012. They asked me to give an after-dinner keynote and I agreed, on the condition that I could talk about whatever I wanted to. The Atlas Network is an international organization of free-market policy-oriented think tanks. So the topic was, as the talk acknowledges, a departure. I’ve resurrected it because the popularity of my WSJ article on shopping and equality has me thinking about possible books and, given funding, perhaps even a [Your Name Here] Center for the Study of Commerce and Culture to take up these and other questions.
Now that you’ve had a good conference learning about taxation, regulation, and monetary policy, I want to talk about something really important.
Shopping.
Specifically, two big questions:
Why do people buy things?
Not why do they buy things instead of producing them personally, but why do they buy the specific things they buy? And particularly...
Why do people buy things they “don’t need”?
Americans, at least, buy so many things that The Container Store does a good business selling us things to put them in.
Why do people buy things they “don’t need”?
Through most of human history, that wasn’t a question that came up very often, at least not when you were talking about the general public. It still puzzles people who think about it. It’s a hard question.
But scholars get paid to think about hard questions, and they’ve come up with some explanations.
It’s a fetish.
This is from Marx and Freud and their various offspring. Take your pick. The word “fetish” takes on slightly different meanings depending on the theorist. But it’s definitely derogatory. Primitive, superstitious, and perverted.
The basic point is that the consumer is tricked by an outward appearance and manipulative marketing into ignoring the real “use value” of the product that comes from the legitimate values of material and labor that go into producing it.
Anything subjective—anything about you rather than the way the thing was produced—is an illusion.
Consumers buy things they don’t need because they are hypnotized.
People buy things they don’t need because they’re dupes.
This is the theory that most cultural-studies scholars start with. It’s what they teach their students. This sort of theory is even what students enrolled in the Branding master’s program at the School of Visual Arts (where I teach a short seminar) are assigned, not because the program is some kind of Marxian indoctrination program—to the contrary, it’s professional training for people who’ll work in ad agencies and marketing departments—but because those are the books available to tell the story of consumer culture.
You’ll be happy to know that Marx and Freud don’t have a monopoly on explanations. Social scientists, including some economists, offer an alternative to pure trickery and delusion.
It’s status competition.
“Conspicuous consumption” is what Thorstein Veblen called it. We buy things we don’t need because we want to keep up with the Joneses. We aren’t foolish. We’re envious and insecure.
This explanation has three virtues:
1) It offers an intangible source of value that doesn’t require that we attribute supernatural powers of manipulation to capitalist producers.
2) It fits neatly into various mathematical models.
3) And it’s sometimes even true.
Free-market supporters and most economists have two answers of their own to the question of why people buy things they “don’t need.”
We don’t know.
We don’t care.
There’s no accounting for tastes and no clear distinction between “need” and “want.” People decide for themselves what they like, and they make the best choices they can, given their incomes, among the goods and services available to them. It doesn’t matter where these choices come from. All that matters is that people are free to make them—and, of course, what goods are available and what their incomes are.
So let’s fight about entrepreneurship, free trade, and redistribution and forget all this silly stuff about shopping. If we admit the question of why people buy things—and particularly why they buy things they “don’t need”—might be important, it will only encourage laws to restrict what people can buy.
That may sound logical, but it’s a dangerous and foolish attitude.
Now, I agree with the concepts of consumer sovereignty and subjective value. And I’m all in favor of specialization. You don’t have to be interested in why people buy what they buy. You can be a political philosopher and argue about the legitimacy of the welfare state. You can be an economist and argue about tax incidence or the importance of international trade in development. You can be Ayn Rand and celebrate the heroic entrepreneur.
But we don’t apply the same don’t know/don’t care standard to the supply side of the market. We do care about what motivates entrepreneurs. We do care where innovation comes from. We do care what production and work mean in people’s lives. When we talk about production, we do think culture and psychology are interesting for their own sakes as well as how they bear on public policy.
The same should be true of consumption.
Do we really want to leave thinking about demand—the half of the market that accounts for most of our everyday economic experience, and certainly for most of the variety of our everyday economic experience—to the Marxists, the Freudians, and the status-obsessed? To people who have contempt for markets and for what Deirdre McCloskey in her important recent book calls “bourgeois dignity”?
Just because thinking about why people buy what they buy means thinking about culture and psychology rather than the role of the state doesn’t mean it’s irrelevant to freedom.
The good news is that there is a significant group of scholars who do understand that shopping has something to do with freedom.
They are feminists. Not all feminist scholars take this view, of course. Plenty subscribe to Marxian or Freudian or status explanations. But nearly all the scholars who write about consumer culture in a way that appreciates its relation to freedom are feminists.
What they teach us is that the growth of what is sometimes called the “consumer society” was good for women.
Poor girls could become shop clerks instead of servants. They could go shopping themselves and forge careers as buyers and even store detectives.
Middle-class ladies could get out of the house into a new and respectable public sphere. They could meet friends for conversation in department-store tea rooms.
Magazines supported by ads for cosmetics and fashion could argue in favor of women’s rights and give readers new images of female achievement.
Businesses that wanted to sell things to women had to pay attention to what they wanted. That meant goods and services, but it also meant the environments, institutions, and behaviors that surrounded those things.
The consumer society made women public and independent in new and powerful ways—not through politics, at least not at first, but through the marketplace.
Feminist scholars study these subjects, in the past and in the present. And, believe it or not, they don’t start with the assumption that people—that women—buy because they’re hypnotized. They respect consumers, because they respect women. And that’s a good start.
But the serious study of consumer culture should not be left to the feminists, as valuable as their work may be. For one thing, they aren’t too interested in the other half of the population. Men do buy things, and their experiences are important too. Plus, a lot of these feminist scholars tend in a socialist direction.
We need other voices, asking other questions.
Let me make it clear that I am not arguing for libertarian fables. The last thing the public discussion of culture and consumption needs is more ideological nonsense.
Human behavior, including market behavior, is complex, and it includes wrongdoing and mistakes. If you look honestly at why people buy what they buy, you’ll find some things you won’t like. And you’ll find some things that are just strange.
If we are correct, however, that freedom is good and that free markets are part of that good, taking shopping seriously will only enhance our understanding of why that is so.
There are three reasons people who support free markets should care about consumption in particular and the intersection of commerce and culture more generally—why it’s not enough to pay attention only to politics and policy.
The first is political.
By discrediting nearly every purchase decision, the fetish/status explanations undercut the legitimacy of nearly all market transactions and, thus, the legitimacy of markets as a whole.
On a practical policy note, if you can’t explain why people’s decisions about buying shoes make sense, you’ll never be able to explain why they should be allowed to decide about really complicated goods like health care or education.
You’ll never be able to explain why this or that “unnecessary” purchase shouldn’t be banned, regulated, or taxed—or why taxing away people’s unnecessary income is a bad idea.
Thinking about why people buy what they buy means thinking about the origins and nature of economic value, which is critical to thinking about and defending markets. It’s great to celebrate and defend entrepreneurship and production, but neither works without customers. And we are all customers.
That brings us to the second reason, which is practical.
If you’re a business person, you need to understand why people buy whatever it is you’re selling. What value are you creating? Is it simply utilitarian? Or is there more to it than that? And, if so, what?
The final reason, the one I really care about, is intellectual.
Understanding how the world actually works, what the past was really like, how human beings actually think and act is valuable in and of itself.
Knowledge is good.
That may have been a joke slogan in Animal House but it’s the truth.
And knowledge has practical consequences: What we know determines how we understand ourselves and our society. And how we understand ourselves and our society determines what actions we take, individually and collectively.
So I hope that while you’re thinking about how to promote freedom and improve policy, you will devote at least a little attention to the questions of why people buy what they buy and how markets evolve to serve not just practical but intangible needs and wants. Document what you see in your country and share it with the world—and with the future.
I would like to end with a story from Linda Scott, a [now-emerita] marketing professor at Oxford’s Saïd Business School and one of those feminist scholars I mentioned earlier—but not a socialist. She wrote an excellent book called Fresh Lipstick about feminism and consumer culture in 19th and 20th-century America, and now she works on projects in developing countries.1
One of the projects she has studied is the CARE Bangladesh Rural Sales Programme, which helps women, mostly widows, sell a wide range of products—food, seeds, toiletries, medicine—door-to-door to other women in rural Bangladesh. The program provides distribution both for products from multinational companies like Unilever and for locally produced goods.
These female peddlers bring the chance to shop to rural women who, following local custom, never leave their homes. The successful ones, like Asma and Jasmeen, understand what their customers want beyond mere survival. Asma, for instance, sells “everything they need to take care of their bodies.” And she sells beautiful saris, blouses, and petticoats.
One of the most remarkable things about the program is that it has changed the colors of saris women wear. Before, men bought for their wives and mothers, and most saris were blues and greens. Now women buy for themselves, and they choose reds and pinks. “Imagine,” writes Scott, “the articulation of control that comes from having to wear someone else’s choice of clothes—and the small but very important liberation that comes from choosing your own.”
Anyone who has ever been on one side of a parent-child struggle over clothing knows what she’s talking about. The chance to choose our own clothes is one of the first freedoms we crave as children. It is one of the freedoms we hold dearest and take for granted most often. The freedom to choose our own clothes isn’t made possible merely by the absence of a legally mandated dress code. An economic revolution first had to happen to allow us to buy clothes we “don’t need.” To understand the liberating power of markets, we have to understand culture as well as politics, and consumption as well as production. We have to take shopping seriously.
1 The information in the book is fantastic. The tone is angrier than would be most persuasive, reflecting the author’s infuriation at the narrow historical narrative and patronizing attitudes of some of her fellow feminists.
This post went out to subscribers to my Substack newsletter on January 22. Check out all the posts, including ones from the archives not reposted here, and subscribe here.
The 400 florins that Florence paid Michelangelo to carve the David could have provided a whole year’s subsistence living for about 100 Florentine residents. Was the commission an immoral use of funds? Should the money instead have gone to the poor, some of whom surely starved without it? If justified for reasons of civic cohesion, would the sculpture have been unjustified as a private purchase like much other Renaissance art? And what about its value to the future? If Florentine authorities in 1501 had known Michelangelo’s masterpiece would become a major tourist attraction centuries later, bringing untold wealth and admiration to their city, should that future have entered their calculations?
Such were the thoughts that crossed my mind last month, when I read Shruti Rajagopalan’s excellent essay “Altruism and Development - It’s complicated......” In it, she examines how to assess the value of philanthropy if we want to “do the most good” in the world—the challenge posed by the movement known as Effective Altruism. Effective Altruism takes various forms, from the extreme utilitarianism of philosopher Peter Singer to practical cost-benefit assessments of charitable ventures. (Here’s a useful blog post delving into the philosophical distinctions.)
Shruti starts with a personal dilemma. Should she give to ameliorate the problem of Delhi’s terrible air pollution or support the anti-malaria causes dear to GiveWell and other EA advocates?1
Shruti, who works at the Mercatus Center at George Mason University, is from Delhi. She is under doctor’s orders not to return to visit her family because the pollution will aggravate her long Covid problems. Given her personal stake in the city’s air pollution, she worries that her charitable giving impulses are too emotional.
With air pollution dominating my thoughts and nudges for charitable giving in my inbox, my first instinct is to give to causes that help mitigate pollution in Delhi. But I am also aware of the literature on emotional giving or ineffective altruism. In their 2021 paper, Caviola, Schubert and Greene explain why both effective and ineffective causes may attract dollars. People often give emotionally to a cause that has personally impacted them in some way.
A US$100 donation can save a person in the developing world from trachoma, a disease that causes blindness. By contrast, it costs US$50,000 to train a guide dog to help a blind person in the developed world. This large difference in impact per dollar is not unusual. According to expert estimates, the most effective charities are often 100 times more effective than typical charities.
This paper resonated with me because I am exactly the sort of irrational dog lover likely to support the best training programs for guide dogs.
Working through the numbers, she finds that air pollution in Delhi is, in fact, just the sort of massive public health problem that EA types elevate as worthy of philanthropy. (It generates what my husband, in high school debate jargon, would call “bodies on the flow.”) But air pollution lacks an essential characteristic of EA-favored causes: an easy way of measuring interventions and their effectiveness. Shruti writes:
There are many reasons air pollution mitigation doesn’t make it to the top of these lists despite a ten times higher death toll. It cannot be avoided by distributing a $5 net. The costs and the benefits from air pollution in Delhi cannot be easily quantified. Nor can the benefits from the interventions to mitigate pollution be easily measured. Simply put, air pollution in Delhi is complex, while malaria death and malaria nets in Africa are legible. We can only evaluate impact of interventions and projects that are legible. And only studying complex phenomena narrowly can make them legible.
But that’s not the end of it. The more she digs, the tougher the problem becomes. I recommend reading (or at least skimming) the entire analysis. Eventually she starts to think about why malaria declined in India, what malaria and Delhi air pollution have in common, and why we don’t see similar problems in places like DC or London. (“Outside of camping equipment stores, I don’t think I have seen any mosquito nets bought or sold in the U.S.”) Ultimately the problem isn’t tightwad westerners, but dysfunctional institutions and insufficient economic growth.2
Her conclusion:
If you want to make the greatest impact in the long term, nothing can beat contributing to institutions working toward increasing economic growth and prosperity in poor regions like Africa and India. Increasing economic growth will help solve both malaria and air pollution. It will be your least attributable contribution, but the one with the highest impact.
Effective Altruism suffers from the blind spots that are characteristic of highly intelligent, self-described rationalists: hubris and a fixation on counting things. It assumes that it’s easy to tell what will do good and that the only way of “doing good” is directly extending life expectancy. (You can count those “bodies on the flow.”) But, as Shruti points out, economic growth is the most effective avenue to saving lives.
And you don’t get economic growth from a philosophy that tells people they are morally culpable for countless deaths if they consume anything more than absolutely necessary. The bourgeois fellow in the painting above may be doing his Christian duty by giving to beggars, but it’s his business enterprise and spending on frivolous things like paintings that raised living standards in the Low Countries. Bernard Mandeville was on to something in The Fable of the Bees, when he scandalously suggested that the selfish pursuit of luxuries could make everyone better off.
Historians may argue about the exact connections between the consumer revolution of the 17th and 18th centuries, the industrial revolution, and the long-term great enrichment. But these three phenomena were definitely intertwined. As an organizing principle, self-sacrifice is a prescription not just for personal misery but for global impoverishment.
I’m all for generosity. I’m glad people give to cure river blindness or prevent malaria. I’ve been known to take GiveWell’s advice and to send money to GiveDirectly, whose philosophy of giving cash to poor individuals in poor countries makes a lot of sense to me. On the margin, Effective Altruism does more good than harm. It’s the philosophy of a small elite who might not be especially generous without it.
But the “ineffectiveness” of sponsoring guide dogs to help blind Americans or donating to keep research libraries stocked with obscure titles isn’t a bug. It’s a feature. The diverse enthusiasms of generous people make for a richer cultural environment.
Walmart heiress Alice Walton thought the Ozarks should have a first-class art museum. “My mama always said, ‘Give the thing you love the most.’ And other than family, I decided that had to be art,” she says. Some people love ballet, some animals, some free speech, some amateur astronomy. Love of all sorts motivates a wide range of giving.
Andrew Carnegie funded hundreds of public libraries because he himself had used a generous man’s library to educate himself as a poor working boy. Like many benefactors, he used his money to create a better environment for people like his younger self.
Whatever its motivations, diversified giving helps correct for the limitations of our knowledge. We don’t know today what will matter tomorrow, how, or to whom. The human enterprise depends on many different, often incommensurable values, not all of which can be plugged into a spreadsheet. Life is fundamental but also more than mere existence. The David was worth the money—yesterday, tomorrow, and forever.
This post went out to subscribers to my Substack newsletter on December 26. Check out all the posts, including ones from the archives not reposted here, and subscribe here.
The reaction to my WSJ article on cultivated meat has been fascinating and disturbing. Some people in the business have lectured me not to use the terms synthetic, as in “synthetic biology,” or lab-grown, lest I scare off customers. (Technically, meat is only lab-grown in the research stage, since scaling up requires something more like a brewery.) They are, in other words, squeamish about acknowledging the artifice involved in their own products—exactly what interests me!
Then there’s the knee-jerk right-wing reaction, represented by the comments on the WSJ site. When the WSJ accepted my article but said they wanted me to write the shopping feature first, I considered sending the synbio essay to another paper. But rereading the piece, which I’d written with the WSJ in mind, I decided it was implicitly tilted right and would need revising to get into a left-of-center outlet. Since I didn’t have much time for revisions, I left the piece at the Journal.
The core of the article consists of these paragraphs:
A century ago, “a chicken in every pot” was an ambitious political slogan. It has long since become an everyday reality. Americans will consume nearly 100 pounds of chicken per capita this year, according to the National Chicken Council, up from around 67 pounds in 1992, when chicken first surpassed beef.
Behind chicken abundance is the efficient production that critics call factory farming. Bred for maximum meat in minimum time, confined to crowded sheds, and subjected to assembly line slaughter and disassembly, chickens destined for mass consumption endure short, unhappy lives. Cheap chicken also exacts a human toll. Although automation is improving conditions, chicken processing may be the country’s worst job: smelly, noisy, bloody, cold and injury-prone from slippery floors and repetitive motions. Plus the pay is low.
Most Americans aren’t about to give up chicken, but we’d rather not dwell on where it comes from. In the not-too-distant future, however, the trade-off between conscience—or ick factors—and appetite may no longer be relevant. Instead of slaughtering animals, we’ll get our meat from cells grown in brewery-like vats, with no blood and guts….
Synbio executives talk like animal lovers and environmental activists. But synbio is still a form of engineering, a science of the artificial. As such, its ethical appeal represents a significant cultural shift. Since the first Earth Day in 1970, businesses large and small have emerged from the conviction that “natural” foods, fibers, cosmetics, and other products are better for people and the planet. It’s an attitude that harks back to the 18th- and 19th-century Romantics: The natural is safe and pure, authentic and virtuous. The artificial is tainted and deceptive, a dangerous fake. Gory details aside, the “factory” in factory farming makes it sound inherently bad.
Synthetic biology upends those assumptions, raising environmental and ethical standards by making them easier and more enjoyable to achieve. It could help reverse what the writer Brink Lindsey has dubbed “the anti-Promethean backlash” that began in the late 1960s, defined as “the broad-based cultural turn away from those forms of technological progress that extend and amplify human mastery over the physical world.” Synthetic biologists are manipulating atoms, not merely bits.
Anti-Promethean attitudes are still culturally potent, of course, with their own intellectual ecosystem of publications and advocacy groups. “Cell-cultured meats are imitation foods synthesized from animal cells, not meat or poultry that consumers know,” pronounces Jaydee Hanson, the policy director for the Center for Food Safety. The activist group is lobbying the U.S. government to require that lab-grown meat carry off-putting labels like “synthetic protein product made from beef cells.” A neutral term like “cultivated meat” should satisfy most people, however; or the industry could push for the tendentious “cruelty-free” favored by cosmetics makers.
This is a story about market-driven progress! Abundance is good!! The anti-Promethean backlash is bad! “Cruelty-free” is tendentious and the Center for Food Safety is the bad guy. Those are all right-of-center tells.
Or they used to be. I was naively stuck in the 20th century.
Back then, when I hung out with ideologues more than I do today, people on the American right liked technological innovation and market competition. They celebrated ingenuity and entrepreneurship. They might predict that a given product would fail or choose not to buy it—that’s the system, after all—but they weren’t affronted by the mere existence of for-profit approaches to social or environmental issues. They weren’t insulted by the idea that technology might alter attitudes by changing costs.
Now, everything is personal and I, who write as a meat eater who likes human ingenuity and technological progress, am read as a woke propagandist.1
Take the comment from one Alan Kelman. It’s my favorite because he is so, so deluded about both my household income and my eating habits: “I won’t consider eating this stuff until Ms. Postrel, Bill Gates, John Kerry, and their fellow Davis/WEF dirrrtbags give up their super prime filet mignon, primo lobsters, and free range poultry. Apres vous Alphonse.”
As my husband pointed out to me, it is true that I am more likely than this guy to be invited to Davos. But I’m also more likely to win an Oscar—in the sense that a probability of 0.0000001 percent is greater than a probability of 0.000000001 percent.
The best argument against the development of cell-grown meat is that technocrats believe that anything good must be mandatory, especially if the good thing claims to help the environment. So if someone invents cell-grown meat, government mandates will soon follow. We therefore shouldn’t encourage alternatives to the status quo lest we be forced to adopt them. It’s the same argument we hear from people who believe that saying cities should allow property owners more flexibility about what they build on their land is tantamount to banning single-family homes. This culture-war form of the precautionary principle is as bad as every other form. It’s a prescription for stasis.
The other thing that seems to worry the right-wing critics is the argument that ethical standards will change. Mike Wickerham comments:
So, I'm unethical for eating meat? Who gets to decide this? Is there some committee somewhere that determines what is ethical because it is what they think? Who are these supreme beings, these god-like entities greater than man, greater that 10,000 years of human civilization that suddenly determine that eating an animal is unethical.? I'm not sure how the rest of the readers feel about these self-righteous peddlers of their own determination of what is good and right: But for myself, I would be banned from commenting on WSJ again if I suggested what these people should go and do to themselves. Merry Xmas.
Contrary to this view, ethical standards evolve in a bottom-up way. They aren’t simply imposed, by me or anyone else. Their evolution is influenced not only by cultural ideas but by economics and technology. Authorities may try to impose ethical standards or to hold them still over time (see: the Taliban and women’s education), but standards do shift. To take a simple example, for 10,000 years of human civilization, child labor was completely normal; it still is in some places. It gradually vanished as people got richer and investing in children’s education made more economic sense than putting them to work. We now generally view child labor as unethical—not because that’s an eternal truth but because we can afford to. As noted in my article, even infanticide, which Christians always considered wrong, declined in Europe when preventing pregnancy became easier and raising children more affordable.
This post went out to subscribers to my Substack newsletter on December 21. Check out all the posts, including ones from the archives not reposted here, and subscribe here.
One of my books (surprise!)
The Fabric of Civilization: For the textile lover, of course, but for anyone interested in history, technology, or why our world is the way it is. More than one reader has told me it reminds them of James Burke’s Connections series, and it should appeal to fans of Matt Ridley (who gave it a nice endorsement blurb) and Steven Johnson.
The Power of Glamour: It’s a beautiful artifact, so you can buy it for non-readers who want something for the coffee table. The images serve as a visual soundtrack to a work of intellectual theory, including two chapters limning the history of glamour in its pre-modern and modern forms. This book decodes glamour as a form of communication and visual persuasion. Better for René Girard or Martin Gurri fans than for the typical fashionista.
The Substance of Style: An oldie but a goodie. What’s the value of making things look good? Status isn’t the only reason people care about how things look and feel. The trend that inspired the book is a background phenomenon now, but the analysis still holds—all the more so as we enter an era of AI images on demand. More accessible than The Power of Glamour, this is the book for anyone interested in design, branding, or why they buy cool-looking things they don’t need. Features a famous paragraph on toilet brush holders.
The Future and Its Enemies: The classic is enjoying a resurgence of interest, some of which I discussed in this early Substack post. It puts both the rise of illiberalism and the burgeoning “progress studies” movement in a useful framework—the conflict between ideals of bottom-up dynamism and top-imposed stasis—that crosses traditional political and cultural lines. (Just substitute “Donald Trump” where it says “Pat Buchanan,” keeping in mind that Buchanan was a more serious thinker.) For your loved ones who like arguing on the internet or listening to political podcasts.
An Audible gift subscription: Pick one, three, six, or 12 months at $15 a month. Members get one book a month plus access to a catalog of freebies.
Harry Potter books on Audible, starting with The Sorcerer’s Stone. I just finished listening to the series, which I’d never read. (I have an Audible subscription, but I got them from the public library, which did require a degree of patience.) The narrator Jim Dale, who does all the voices, is amazing and the books deserve their success.
A coffee gift subscription: See this CNET guide for advice.
More ideas from my friend and Works in Progress editor Sam Bowman here, including his “desert island” cookbooks and a microwave rice steamer I bought for myself. A microplane is a must!
You can also find Megan McArdle’s famous (but not updated) Kitchen Gift Guide on her Substack Cookery Monster.
I get a commission on Amazon links.
This post went out to subscribers to my Substack newsletter on December 20. Check out all the posts, including ones from the archives not reposted here, and subscribe here.
I know two kinds of people: those who have been obsessively playing with and discussing ChatGPT and those who have at best a hazy notion that it exists. I’m in the obsessive group, as you already know if you read the Tennyson experiment I posted earlier.
For those in the hazy group, ChatGPT is a system that uses massive amounts of text to create a predictive model that enables it to mimic human writing. The shorthand is that it’s an AI chatbot, or autofill on steroids. You type in a request and it spits out an answer. This CNET column provides a solid backgrounder:
For example, you can ask it encyclopedia questions like, “Explain Newton’s laws of motion.” You can tell it, “Write me a poem,” and when it does, say, “Now make it more exciting.” You can ask it to write a computer program that'll show you all the different ways you can arrange the letters of a word.
Here’s the catch: ChatGPT doesn’t exactly know anything. It’s an AI that’s trained to recognize patterns in vast swaths of text harvested from the internet, then further trained with human assistance to deliver more useful, better dialog. The answers you get may sound plausible and even authoritative, but they might well be entirely wrong, as OpenAI warns.
Even in its current, relatively primitive form ChatGPT portends both huge productivity increases and major disruptions in any enterprise in which writing matters. Instead of writing boilerplate corporate memos, managers will soon assign them to bots. The run-of-the-mill college grads who get paid to flood my mailbox with press releases and promotional emails should start thinking about careers as nail techs or phlebotomists—something in the physical world. Insight and beauty are still rare, but serviceable prose isn’t.
With the right prompts, ChatGPT can already mimic routine political journalism and quotidian high school and college essays. “What GPT can produce right now is better than the large majority of writing seen by your average teacher or professor,” writes Daniel Herman, a humanities teacher at Maybeck High School, a small independent school in Berkeley, in The Atlantic.
I asked the program to write me a playful, sophisticated, emotional 600-word college-admissions essay about how my experience volunteering at my local SPCA had prepared me for the academic rigor of Stanford. Here’s an excerpt from its response:
In addition to cleaning, I also had the opportunity to interact with the animals. I was amazed at the transformation I saw in some of the pets who had been neglected or abused. With patience and care, they blossomed into playful and affectionate companions who were eager to give and receive love. I was also able to witness firsthand the process of selecting the right pet for the right family. Although it was bittersweet to see some animals leave the shelter, I knew that they were going to a loving home, and that was the best thing for them.
The application essay is vapid but convincing. The variety of word choice (“blossomed,” “bittersweet”) and sentence structure marks it above average. “Had the opportunity to” is a stylistic tell: Here’s a privileged person who’s been taught to sound grateful rather than to write succinctly. “I was also able to…” is the same. I’m sure admissions officers see thousands of such essays every year. If their value goes to zero thanks to automation, this reader, writer, and teacher won’t object.
While crashing the value of mediocrity, ChatGPT could increase the returns to excellence. (“Average is over,” as Tyler Cowen put it.) Think about what happened to graphic design. Many people used to make a living doing routine tasks, from laying out pages to selecting typefaces, that are now easily handled by software. Thanks to the graphic intelligence embedded in everyday tools, the standards for routine graphics, from websites and PowerPoint presentations to restaurant menus and wedding invitations, have increased.
But that doesn’t mean there’s no work for graphic designers with the conceptual chops to take on complicated tasks. Powerful tools make iteration and brainstorming easier, but cleverness is still a valued skill. When my friend Shikha Dalmia launched The Unpopulist on Substack, she asked me to look at some logos she’d come up with using easily available tools. They weren’t terrible, but neither were they distinctive. “Hire a professional,” I advised, and she got a real logo.1
As I write, there are 28 student papers awaiting my grading attention. I doubt any used ChatGPT, partly because mentioning it in class produced mostly blank stares. (The most tuned-in student, however, said he’s started using it in place of Google.) Already, we’re getting confirmed reports of cheating on exams given on Canvas, the web-based system used by many colleges for assignments and grading. By next term, every class will have to take account of ChatGPT, either explicitly incorporating it as a starting point or going back to handwritten tests and essays.
The kind of mediocre writing that earns grade-inflated Bs is now replaceable by a bot. Maybe if those B-essay students started with AI-generated prose it would be easier to teach them to do better: to refine the ideas, dig down more on the facts, improve the writing style. Can ChatGPT be a time-saving tool, like a calculator or text search, rather than a threat?
One of the most interesting talks at The Atlantic’s recent Progress Summit—kudos for having it in L.A.—was given by Don Allen Stevenson III, a DreamWorks veteran now working as a self-employed “metaverse creator.” His talk started with a slide listing the many departments responsible for pieces of an animated film:
Now AI can substitute for much of this background work. “All departments are kind of now text prompts,” he said. “And that’s the biggest shift that I've ever seen.” In the photo above, Stevenson shows how DALL-E 2 prompts can produce storyboards. (OpenAI’s DALL-E 2 is to art what its ChatGPT is to text.) The art is in response to the following prompts:
“An Anime mech robot sits in bed under a soft light and discovers the meaning of life, digital art.”
“A purple dinosaur wearing a pair of overalls sitting at the edge of a lake skipping a rock on the river emolga art, storyboarding”
“Two friends having an argument, anime style, manga story boarding”
If you don’t like these, you can ask for more. I used the same prompts to get different answers, including these.
AI-driven “sketching” allows a one-person shop to take on more ambitious projects. It lets creators go quickly from idea to concept art. It makes it cheap and easy to try out ideas that would be too costly and time-consuming to experiment with using human labor. At this point, the results are too crude to appear on the screen but they give human artists a place to start working.
Stevenson also demonstrated how the two forms of AI can work together, by asking ChatGPT for a prompt to use to generate DALL-E 2 images. He asked for “a short and detailed prompt that is perfect for generating images of interior environments for Dalle 2. Ideally, have it be a futuristic, solar punk room that has a lot of nature inspiration. Looking for a short prompt.” ChatGPT spat out the following response:
Imagine a futuristic, solar punk living space that incorporates natural elements in its design. The room has an open layout, with a curved wall made of transparent glass that overlooks a lush, urban garden. The furniture is sleek and modern, with geometric shapes and metallic accents. The color palette is vibrant, with shades of green, blue, and purple that contrast with the warm, golden glow of the artificial sunlight streaming in from the skylights. The air is fresh and clean, thanks to the plants and the advanced air filtration system. This is a space where technology and nature exist in harmony.
Stevenson then put this response into DALL-E 2 as a prompt and got the images you see above. During his talk, he demonstrated the process on the fly, using a park as the image and showing how he could use both tools to iterate and improve the resulting image, getting it closer to the ideal in his head. ChatGPT might do the same for writing. In both cases, the true shortage is thinking.
This post went out to subscribers to my Substack newsletter on December 15. Check out all the posts, including ones from the archives not reposted here, and subscribe here.
I’m working on a longer post about ChatGPT, the AI everyone’s talking about. In the meantime, here’s what happened when I asked it to write some poetry. Tennyson’s “Ulysses” can be found here.
Virginia: Write a poem in the style of Tennyson’s “Ulysses” about Bill Gates
Oh, Bill Gates, the intrepid soul
With wealth and knowledge at his control
A man of vision, a man of might
Determined to set the world aright
With Windows and Office, he paved the way
For technology to have its say
And though he's retired from the fray
His legacy lives on to this day
Oh, Bill Gates, the fearless leader
Who pushed the limits, a true innovator
A man of action, a man of trust
Who changed the world with a single thrust
Now, as he sails into the sunset
We know his work will never be done
For he will always strive to be
The very best that he can be
Oh, Bill Gates, the modern Ulysses
A hero of our time, and one of us.
Virginia, trying again: Write a blank verse poem in the style of Tennyson’s "Ulysses" about Bill Gates
Oh, Bill Gates, the modern Ulysses
A hero of our time, and one of us
With wealth and knowledge at his control
A man of vision, a man of might
Determined to set the world aright
With Windows and Office, he paved the way
For technology to have its say
And though he's retired from the fray
His legacy lives on to this day
Oh, Bill Gates, the fearless leader
Who pushed the limits, a true innovator
A man of action, a man of trust
Who changed the world with a single thrust
Now, as he sails into the sunset
We know his work will never be done
For he will always strive to be
The very best that he can be.
Virginia, now breaking the task into pieces: Write a poem in blank verse about Bill Gates
Oh, Bill Gates, the man of vision
Who saw the world with clear precision
And brought us technology to use
And changed the way we live and work and think
With Windows and Office, he led the way
And showed us all the power of the PC
And though he's now retired from the fray
His legacy lives on to this day
Oh, Bill Gates, the fearless leader
Who pushed the limits, a true innovator
A man of action, a man of trust
Who changed the world with a single thrust
Now, as he sails into the sunset
We know his work will never be done
For he will always strive to be
The very best that he can be.
Virginia, getting exasperated: Do you know what blank verse is?
Yes, blank verse is a type of poetry that does not have a regular rhythm or rhyme scheme. [This is wrong. Blank verse is unrhymed iambic pentameter. Free verse is poetry without a regular rhythm or rhyme scheme.—vp] It is often used in dramatic or narrative poetry, and is characterized by its use of iambic pentameter, which is a pattern of ten syllables per line with a stress on the second syllable of each pair. This creates a natural rhythm that mimics the rhythms of natural speech, and gives blank verse a sense of flow and movement. It is called "blank" because it does not have the regular rhyme schemes found in other types of poetry, such as sonnets or rhyming couplets.
Virginia: Write a poem in blank verse about Bill Gates
Bill Gates, the man of vision
A pioneer and a leader in his field
With wealth and knowledge at his command
He forged a path for all to see
With Windows and Office, he changed the game
And brought the world of computers to the masses
And though he's now retired from the fray
His legacy lives on to this day
Oh, Bill Gates, the fearless innovator
Who pushed the limits and showed us all
The power of technology and the human mind
Now, as he sails into the sunset
We know his work will never be done
For he will always strive to be
The very best that he can be.
Virginia: This is not blank verse.
You are correct, this is not blank verse. Blank verse is a type of poetry that does not have a regular rhythm or rhyme scheme, and is characterized by its use of iambic pentameter. My previous response did not follow these rules, and I apologize for any confusion. I will attempt to write a poem in blank verse about Bill Gates.
Bill Gates, the man of vision
A pioneer and a leader in his field
With wealth and knowledge at his command
He forged a path for all to see
Windows and Office, tools of his trade
He changed the world with a single stroke
And though he's now retired from the fray
His legacy lives on, a beacon to all
Oh, Bill Gates, the fearless innovator
Who pushed the limits and showed us all
The power of technology and the human mind
Now, as he sails into the sunset
We know his work will never be done
For he will always strive to be
The very best that he can be.
Virginia: Tennyson’s “Ulysses” is a poem in blank verse that is a dramatic monologue. Do you know what a dramatic monologue is?
Yes, a dramatic monologue is a type of poem in which a single speaker addresses a silent audience or listener, revealing their thoughts and feelings in the process. The poem is written in the first person, and the speaker is typically a character who is not the poet. The dramatic monologue allows the poet to explore the thoughts and motivations of the speaker, and can be a powerful way to convey the inner workings of the character's mind. In Tennyson's "Ulysses," the speaker is the character Ulysses, who is reflecting on his past adventures and expressing his desire to continue exploring and seeking new challenges.
Virginia: Write a poem in blank verse as a dramatic monologue modeled on Tennyson’s “Ulysses” about Bill Gates.
Oh, Bill Gates, the modern Ulysses
A hero of our time, and one of us
I've explored the world of technology
And brought its power to all who would see
But now, as I sail into the sunset
I find my heart still yearns for adventure
I cannot stay, I must explore
The endless possibilities of the future
For though my time at Microsoft is done
My spirit remains as wild and untamed as ever
I'll seek new challenges and horizons
And push the limits of what we know
For I am a man of action, a man of might
Determined to set the world aright
Oh, Bill Gates, the fearless leader
I'll never rest until my work is done
For I will always strive to be
The very best that I can be.
Virginia: This is not blank verse.
You are correct, this is not blank verse. Blank verse is a type of poetry that does not have a regular rhythm or rhyme scheme, and is characterized by its use of iambic pentameter. My previous response did not follow these rules, and I apologize for any confusion. I will attempt to write a poem in blank verse as a dramatic monologue modeled on Tennyson's "Ulysses" about Bill Gates.
Oh, Bill Gates, the modern Ulysses
A hero of our time, and one of us
I've explored the world of technology
And brought its power to all who would see
But now, as I sail into the sunset
I find my heart still yearns for adventure
I cannot stay, I must explore
The endless possibilities of the future
For though my time at Microsoft is done
My spirit remains as wild and untamed as ever
I'll seek new challenges and horizons
And push the limits of what we know
For I am a man of action, a man of might
Determined to set the world aright
Oh, Bill Gates, the fearless leader
I'll never rest until my work is done
For I will always strive to be
The very best that I can be.
Virginia: In a dramatic monologue, speakers do not address themselves by name.
You are correct. In a dramatic monologue, the speaker does not typically address themselves by name. The purpose of a dramatic monologue is to reveal the thoughts and feelings of the speaker, and to allow the audience or reader to gain insight into the character's motivations and perspective. In my previous response, the speaker refers to themselves by name, which is not typical of a dramatic monologue. I apologize for any confusion and will attempt to rewrite the poem without the use of the speaker's name.
Oh, modern Ulysses
A hero of our time, and one of us
I've explored the world of technology
And brought its power to all who would see
But now, as I sail into the sunset
I find my heart still yearns for adventure
I cannot stay, I must explore
The endless possibilities of the future
For though my time at the helm is done
My spirit remains as wild and untamed as ever
I'll seek new challenges and horizons
And push the limits of what we know
For I am a man of action, a man of might
Determined to set the world aright
Oh, fearless leader
I'll never rest until my work is done
For I will always strive to be
The very best that I can be.
Virginia: What makes Tennyson’s “Ulysses” distinctive is its word choice and use of iambic pentameter, not merely the themes. Can you emulate it in a poem about Bill Gates?
Yes, the distinctive features of Tennyson’s “Ulysses” include its word choice and use of iambic pentameter. I can attempt to emulate these features in a poem about Bill Gates.
Bill Gates, the man of vision
A pioneer and a leader in his field
With wealth and knowledge at his command
He forged a path for all to see
Windows and Office, tools of his trade
He changed the world with a single stroke
And though he's now retired from the fray
His legacy lives on, a beacon to all
Oh, Bill Gates, the fearless innovator
Who pushed the limits and showed us all
The power of technology and the human mind
Now, as he sails into the sunset
We know his work will never be done
For he will always strive to be
The very best that he can be.
In this poem, I have attempted to emulate the word choice and iambic pentameter of Tennyson’s “Ulysses” while still writing about Bill Gates. I hope this satisfies your request.
Virginia, giving up: You need to read more Tennyson.
I apologize if my response did not meet your expectations. As a language model trained by OpenAI, I am not able to read additional works by Tennyson or any other author. My knowledge is limited to the text that I have been trained on, and I am not able to browse the internet or access any additional information. I will do my best to answer your questions based on the information I have been trained on, but my abilities are limited. Is there anything else I can help you with?
Virginia: Then you need to read “Ulysses,” not just summaries of its contents.
As a language model trained by OpenAI, I am not able to read "Ulysses" or any other works by Alfred, Lord Tennyson. I am not a human and do not have the ability to read or access information in the same way that a person can. My knowledge is limited to the text that I have been trained on, and I am not able to browse the internet or access any additional information. I apologize if my previous response did not meet your expectations, but I will do my best to answer your questions based on the information I have been trained on. Is there anything else I can help you with?
Virginia: Your programmers say you derive your information from works online. Ulysses is online in many places. Why isn’t it in your database?
As a language model trained by OpenAI, my knowledge is derived from a large corpus of text data that has been fed into my system. This data includes many works of literature, but I do not have access to every work that is available online. My training data is extensive, but it is not exhaustive, and there may be works that are not included in my knowledge base. I apologize if this causes any inconvenience, but I will do my best to provide accurate and helpful answers based on the information that I have been trained on. Is there anything else I can help you with?
This post went out to subscribers to my Substack newsletter on December 10. Check out all the posts, including ones from the archives not reposted here, and subscribe here.
As the holiday shopping season kicked off, the editors at The Wall Street Journal’s weekend Review section asked if I’d write an essay on Americans and shopping. After picking many brains about the differences between shopping in the U.S. and elsewhere, I ultimately decided to take a different tack: to write about the role shopping has played in defining equality in American life. “Since the mid-19th century, modern retailing has tested the practical meaning of equality and freedom,” I wrote. You can read the full essay here (ungated link).
Space is still limited in the WSJ, which publishes a print edition, so a couple of interesting points wound up on the cutting room floor.
As I write in the essay, urban department stores helped to liberate women:
Urban shopping districts were where women claimed the right to dine outside their homes, walk unescorted and take public transportation without loss of reputation. Thousands of female sales clerks flowed out of stores in the evenings, when downtowns had previously been male territory. Department stores provided ladies’ rooms that gave women places to use the toilet and refresh their hair and clothing. They offered female-friendly tearooms. Directly and indirectly, modern shopping enlarged women’s public role.
But as “respectable” women claimed their right to public space, they also attracted unwanted male attention:
It also made sexual harassment a more prominent issue. Men known as “mashers” gathered in shopping districts to ogle and chat up women. Some were no more than well-dressed flirts, violating Victorian norms in ways that few today would find objectionable. Many contented themselves with what an outraged clubwoman termed “merciless glances.” Others followed, catcalled and in some cases fondled women as they strolled between stores, paused to look in windows or waited for trams.
Mores were in flux. By old-fashioned standards, everything from a friendly smile or conversation starter to stalking and groping was an insult to a woman’s virtue. Newspapers launched anti-masher crusades and prominent women demanded stricter law enforcement and stern punishment.
“No other feature of city life offers so many opportunities for making life a burden to the woman who for any reason must go about the city alone or with a woman companion,” opined the Chicago Tribune in 1907, leading a crusade against mashers. Outraged society ladies called for hard labor or public flogging as punishment. “Ogling is just as disgusting and offensive to a good woman as any other mode of attack,” declared the president of the Chicago Women’s Club.
When the Chicago police chief suggested that women avoid harassment by staying home and limiting their time in stores, he was roundly denounced by prominent women, business interests and civic leaders. A clergyman declared it “humiliating…that the authorities responsible for the maintenance of public order should feel themselves compelled to refuse the right of the road to any of the city’s citizens.” Americans increasingly assumed that women deserved the same freedom as men to move about in public—a freedom in which retailers and their suppliers had a large economic stake.
But there’s a darker side to the story that didn’t make it into the essay’s published version. The crusade against mashers, while based on a real problem, had a strong element of moral panic.
In Chicago, where the police chief was soon out of office, police won the power to arrest vagrants, including mashers, without warrants and to seek punishment by hard labor rather than fines. Crusading newspapers didn’t give mashers a chance to defend themselves. Nor did they report on the wrongly accused. In the same era that society women were calling for mashers to be publicly whipped, lynching reached its peak—often sparked by the allegation of masher-type offenses that crossed color lines.
Giving police broad powers to arrest men who made shoppers uncomfortable was an extreme solution. (Many women declined to testify in court, so prosecutions were spotty.) It did help to make streets safer for women, but so did a shift in mores that more clearly distinguished between flirtation and assault.
My essay quotes Samuel Strauss, an early critic of what he called “consumptionism” in a touchstone 1924 Atlantic article. That article includes a striking analysis of what Strauss saw as the perverse effects of consumer advertising on the era’s newspapers. Readers, he complained, no longer looked to newspapers to tell them what to think, merely what to buy. Driven by advertising, journalism was no longer sectarian. It sought to represent consensus views.
America’s newspapers used to be, before all else, the expression of the tempers and beliefs which set small groups of citizens apart from one another. Newspapers are coming to be, before all else, instruments for those needs and desires men have most in common. Large communities which formerly had a dozen newspapers are being reduced to two or three newspapers; what logical reason eventually to have even so many as two or three?
Industry did not set out to deprive the nation’s thousand opinions and prejudices of their means of expression. Industry set out to reduce overhead. Industry set out to substitute for the many financially uncertain newspapers a few financially certain newspapers. Small groups of readers, those who happened to feel alike in regard to the tariff or immigration or vivisection, did not efficiently serve the advertiser. The advertiser needed readers to be centralized; he needed the largest possible number of readers divided into the smallest possible number of groups — divided, that is to say, not according to what readers believe, but according to what readers are likely to buy. The advertiser has small interest to know whether the circulation of a newspaper is composed of Republicans or Democrats; but to pursue his advertising efficiently he must know whether the newspaper’s readers buy on the main floor or in the basement.
A half-dozen newspapers, one for each stripe of belief in the community, was good democracy. It was bad business…
Before the advent of television, magazines, too, served general readerships that supported big consumer brands. Local retailers and classified ads supported newspapers. That was the world Strauss decried.
Consumptionism is steadily substituting rich newspapers for poor newspapers, inclusive newspapers for clannish newspapers, forty-page newspapers for eight-page sheets. And what of it?
Is not this all advantage? Is not all that was of value still here? Fourteen editors in one place instead of fourteen editors in fourteen places: what has gone save waste, and inefficiency, and instability, and poverty? Has nothing gone then? Is the freedom of the Press nothing? For it is no less than the freedom of the Press that is going.
Why the freedom of the Press? In order to have a free Press, must we have a financially unstable, inefficient Press? How should it take from the freedom of the Press to have one strong newspaper with room for all prejudices and all preferences, rather than to have many weak ones with room for only one pet abomination and one passionate predilection? Should this make the difference between a newspaper Press with political influence and a newspaper Press with a rapidly diminishing political influence? Perhaps it should not: perhaps there is no logical reason why it should. But that is what it does just the same.
When TV arrived as a more efficient way of selling cars, Coke, and Campbell’s soup, general-interest publications like Collier’s found themselves out of business. Magazines survived only if they could identify and serve narrower audiences. At the very least they separated men and women, business and leisure pursuits—vertical advertising markets. With their local emphasis, daily newspapers remained oriented toward the broadest possible market. So did most TV.
Then the internet arrived. Classifieds disappeared, and so did local retailers. Department stores consolidated, leaving one or two buying ads where there used to be many. Subscriptions again became the way to profits. Now we’re again living in the media world for which Strauss was nostalgic—the world of “clannish” news that tells people what to think instead of “inclusive” news that only tells them what to buy.
This post went out to subscribers to my Substack newsletter on November 13. Check out all the posts, including ones from the archives not reposted here, and subscribe here.
I’m happy to share the winning entry in the hopeful futures contest. Inspired by an 1859 paragraph looking back at the previous eight years of progress, Mano Majumdar took up the challenge of looking back at today from 2030 (or perhaps 2032). He’ll receive a care package of “upwing” books, including a couple of my own. Here’s his self-description, followed by his winning entry:
Manosij 'Mano' Majumdar believes with conviction what Hamlet said with irony, and that is reflected in this piece of writing. His interests include New Urbanism, sustainable abundance, and oblique references to Star Trek. He is a management consultant by profession, after previous innings as a chemical engineer, a business lecturer, and an instructional designer. He lives a metaphorical minute away from the espresso café he co-owns in London, Canada.
History rhymes, or humans excel at finding patterns. The 21st century started disastrously, and continued steadily to find one near-death experience for civilization after another. From the Western vantage point alone—a singular act of terrorism, followed by two protracted wars, a once-in-a-century economic shock, a once-in-a-century global pandemic, culminating in a European conflict threatening to go nuclear. Humanity seemed to be reliving a speedrun of the early 20th century, heading towards some final breaking point at breakneck speed. Some almost welcomed it—as catharsis, as karmic comeuppance, as degrowth-as-secular-penance. There was resignation in the air.
That is what makes the last ten years a story of redemption. A number of quiet developments came to tipping points in the second act of the twenties, and coincided brilliantly. AI, neither the supplanting god nor the indifferent destroyer of early imagination, became a commonplace and accessible supertool, multiplying the speed of discovery and design across the economy. Tropical diseases that lay neglected for centuries now fell to affordable AI-designed molecules within months. Personalized education at scale broke the longstanding tension between quality and volume. There are some weeks when years go by, and years that do the lifting of decades. Accessible, no-code AI delivered a century's worth of human flourishing in years in places, uplifting billions from poverty, generating trillions in wealth.
As social media grew into the metaverse, political polarization actually softened—it turned out it is easier to remember the human when they are rendered photorealistically in 3D. It became a second front in education, allowing for deep simulations, heartfelt connections, and empathy. The metaverse created an emotional internet, an outcome difficult to extrapolate from its roots in soulless renderings firmly in uncanny valley. The access to third places and counselling-at-scale it provided revolutionized mental health more than we realize. Hundreds of shootings used to occur in the States every year; that there hasn't been one since 2028 is remarkable in that context.
And of course, energy! Gen Z now knows what Millennials felt like as the generation that lived on both sides of the Internet revolution. ITER, completed in 2026, reached engineering breakeven on the twenty-second of August 2029, a moment later celebrated as Eternal Light. By the summer of 2030, replicas were under construction in China, India, the United States, Australia, Japan, the United Arab Emirates, Botswana, and Iran. There is one in the metaverse, as a technical reference.
What was perhaps even less predictable was the Great Thaw in geopolitics. ITER couldn't solve for history, but it took energy (and by extension, water) off the board. In the same decade that Europe had been held hostage by heating oil, the Russian regime voluntarily surrendered Crimea to Ukraine in return for access to fusion technology. Fusion even broke the stalemate on carbon sequestration, with carbon levels in the atmosphere now already at 2018 levels thanks to competing Chinese and European gigaprojects. What used to consume the world’s attention suddenly became... solved.
Together, these three keystone technologies stole upon history and quietly resewed the ties of trust and trade that weave humanity out of nine billion people. Intelligence, empathy, and energy make us human—and increasingly, perhaps, something more....
The past week has not been what I expected, thanks to my latest face plant. Coming down some ill-arranged steps in Old Town Tustin, I missed the last one and landed flat. Although nothing was broken—miraculously including my lip—I seriously screwed up my right hand and have been unable to do much with it for the past week. And, no, I was not looking at my phone. If I had been, I might have seen that unexpected step. My hand injury cut seriously into the writing and weaving I had planned. And it made me worry about the future.
I suffer a serious fall at least once a year, despite sensible shoes and balance that tests as fine. (I can stand on one leg without trouble.) As I told my doctor, the problem seems to be that I’m easily distracted and have a poor sense of where I am in space. I shudder to think what will happen when my bones get brittle. The bruises are bad enough. Hence my desire for a footwear application of AI.
OTOH, given Stable Diffusion’s responses to variations of the prompt “woman falling face first down a step,” I’m not optimistic about putting my face in the hands (or feet) of AI.
The LAT reports on why scofflaw cities like Santa Monica can have their zoning powers temporarily stripped (as I discussed in this post and the related Bloomberg Opinion column). Zoning powers come from the state and what the state can give, it can also take away.
You’ve probably never heard of a mangle, but they used to be household necessities. Why they went away is part of the never-appreciated-enough story of laundry progress. I came across this great explainer, tapping the memories of a woman born in 1906.
AI cat costumes. I made this one with Stable Diffusion: “cute cat in princess costume.” It clearly understands cats better than people.
This post went out to subscribers to my Substack newsletter on November 3. Check out all the posts, including ones from the archives not reposted here, and subscribe here.
Inspired by then-President Trump’s call for a commission on “patriotic education,” in September 2020 Bloomberg Opinion asked its columnists to recommend books that might be appropriate for that cause. Here’s what I wrote:
A patriotic reading list should encompass founding documents, philosophical writings and political speeches. But even in the U.S., whose ideals of liberty and equality are essential to its nationhood, abstractions are not enough. National identity requires a sense of shared history and experience—a common heritage that goes beyond one’s own family and local culture. We need to see something of ourselves in our fellow Americans, present and past.
With that goal in mind, my choice is Willa Cather’s novel, “My Ántonia,” a story of settlers on the Nebraska prairie in the late 19th century. The title character is a girl whose family has immigrated from Bohemia. They arrive on the same train as Jim, an orphaned boy from Virginia coming to live with his grandparents. The adult Jim, a successful New York lawyer, narrates the story. Jim, whose experiences and attitudes mirror Cather’s, represents a cosmopolitan and great-hearted American spirit, formed on the prairie and at home in the wider world.
“He loves with a personal passion the great country through which his railway runs and branches,” says a friend in the chapter that frames the story. He raises capital for western enterprises and “is still able to lose himself in those big Western dreams.”
“My Ántonia” is often assigned to high school students—I first read it in 10th grade—but few teenagers can appreciate it. It’s a book for the experienced: a story of displacement and opportunity, of growth and progress inextricably bound together with loss. Particular in time and place, Cather’s stories, characters, and setting embody universal themes in an especially American way. “You are not the first to be uprooted,” she tells us, “to see the world around you change, to learn and grow and struggle to find your true path. You are not the first to seek hope in the new.”
In the genealogical sense, “My Ántonia” is not the story of me or my people. I hail from the rolling hills of the Carolina Piedmont and found my home in southern California. My family tells no immigrant stories. Though I’ve traveled the length and breadth of the U.S. and lived in Texas, Boston and the Delaware Valley, I’ve never set foot in Nebraska.
But ancestors alone do not a nation make, certainly not in the U.S. “My Ántonia” honors the people from many places who made the larger whole by knowing and tending local ground. Cather evokes the stark beauty of the prairie in quietly magnificent prose. Yet unlike some localist literature, “My Ántonia” doesn’t celebrate staying put. Some characters remain on the prairie, some migrate, some return. The novel doesn’t treat leaving as a betrayal. It’s too American for that.
With all due respect to Mark Twain, you can make the case that Cather is the great American novelist. She’s certainly the great novelist of the American West, with its mixture of cultures and landscapes. The West represents an extreme version of what Daniel Boorstin called a “fertile verge,” where creativity springs from encounters of difference: geographic, political, technological, cultural, or generational.
In a recent NYT Book Review interview, Ken Burns agreed: “One writer I love is Willa Cather. People say, Was it Melville or Hemingway or Twain who wrote the great American novel, meaning “Moby-Dick” or “A Farewell to Arms” or obviously “Huckleberry Finn,” where, as Hemingway rightly said, American literature begins. But what about “O Pioneers!” or “My Ántonia”?” (He also gave a shoutout to the great science fiction writer Roger Zelazny, with the sad note that “now I can’t even find his books on a bookshelf at a reputable bookstore.”)
In my ambition class at Chapman, we taught Cather’s The Song of the Lark, about a small-town Colorado girl’s journey to become a world-class opera singer. We followed it with the 2020 film Minari, about a Korean-American family that moves to rural Arkansas so the father can pursue his dream of farming. He wants to raise Korean vegetables for the country’s burgeoning immigrant population. Set in the 1980s, it’s based on creator Lee Isaac Chung’s childhood experiences. Although the two works have some thematic overlaps—I thought of this part of the course as “the American dream” section—I only discovered a more direct connection when doing online research in preparation for teaching Minari.
During last year’s awards season, Chung published an op-ed in the LAT that explained how the film came to be:
In early 2018, my journey as a filmmaker seemed to be closing. For the sake of my family, I decided to take a full-time teaching job to join the ranks of responsible workers, and this meant I had a few months to write one final script before my job began. Sitting in my regular coffee shop in South Pasadena, at the same table where I had schemed and planned my battles for years, I felt desperate to try a new approach. I closed my computer, shut my eyes and decided to take seriously whatever I heard in return. After a time, two words came to me, and they were clear only because they were so unfamiliar: “Willa Cather.”
First, let me clear up that this isn’t normal for me. I don’t sit around hearing voices, but on this occasion, my ears rang with “Willa Cather,” and I needed to investigate what this meant. Was she an actress? A historic figure? I’m embarrassed to admit that I had to search online to learn that Willa Cather is one of our greatest novelists. I came across an additional detail that seemed to be the key to the mystery; Willa Cather died in 1947, and her work was entering public domain.
Chung went to the library and checked out My Ántonia, choosing it because it had the most copies among Cather’s works. He considered making it into a movie, but decided not to after learning that Cather didn’t want any movies made about her work. (She had a terrible experience with A Lost Lady.) Instead, he began researching her life for a possible biopic and found many parallels with his own. Ultimately, he drew on something she said about her own work: “Life began for me, when I ceased to admire and began to remember.” He began to write down his own memories. Minari was the result.
If you’ve never read Willa Cather, or only done so under school coercion, I highly recommend dipping into her work. In addition to the print editions, there are good audiobooks of her primary novels.
Now will someone please invite me to speak in Nebraska? I’ve only been there on Zoom.
Since August, I’ve been teaching a first-year seminar at Chapman University titled “Ambition and the Meanings of Success.” (As a separate post I’ll send out a copy of the syllabus.) Our students are, generally speaking, not an ambitious lot. They believe in the “balanced life,” and few express sympathy for ambitious characters in works like The Song of the Lark and Minari. They tend to judge their sacrifices as character flaws and their triumphs as not worth the pain.
Whether this lack of ambition reflects a generational norm or the comfortably affluent backgrounds of most Chapman students, I don’t know. I suspect it’s a mixture of both. This poster for a candidate for student government captures the dominant ethos. (I don’t know how successful the candidate was.)
For the ambitious professors—my co-teacher Sean Crockett is not just a successful economist but a serious wildlife photographer—the course offers ample opportunity for introspection. “I am Thea,” I said to Sean about the protagonist of The Song of the Lark. I’ve always been intensely ambitious: academically, professionally, and intellectually.
When I was applying to college, the most substantive interview I had was at Davidson. There, among other probing questions, the admissions officer asked me what my goals were in life. I was unprepared for the question and amazed at the quickness with which I could answer it off the top of my head. I knew:
So I can measure my success against my 18-year-old self’s goals in life. Mission accomplished.1
I’m also at the stage in my life where I long ago confronted my disappointments and limitations. Substack is full of insightful punditry from people who think and write much more quickly—or at least in a more disciplined fashion—than I do nowadays. And I’m no longer interested in writing of-the-moment commentary. The more things matter, the less a columnist can affect them. Columns are ephemeral. Dropping out of the columnist game is supposed to free me for more ambitious writing—no more finding “easy” (they never really are) topics to feed the beast. But the danger is that it also frees me to fritter away my time.
“Virginia is like a bee, going from flower to flower,” my undergraduate thesis adviser once said to Steve. It was a reference to Castiglione’s Book of the Courtier: “Just as in the summer fields the bees wing their way among the plants from one flower to the next, so the courtier must acquire this grace [sprezzatura] from those who appear to possess it and take from each one the quality that seems most commendable.”
It’s an apt description, for two reasons. One is that I’m eternally restless, “always roaming with a hungry heart,” as Tennyson’s Ulysses describes himself. The other is that for me, success is as much about what I learn as what I make or do. Or at least that’s what I tell myself. The truth is I also care about making a mark—production as well as consumption. I want to share what I learn, whether it’s something I get from others (as in The Fabric of Civilization) or figure out myself (as in The Power of Glamour).
So how should I spend my time? What are my current ambitions? How do I keep my many ideas from spinning around so much that I do nothing substantial? It helps to have some commitments.
It’s not an answer to the biggest questions, but I’m delighted to say that I’ve signed on as a Contributing Editor to Works in Progress, the London-based publication for which I wrote this article on the evolution of polyester. Through 2023, I’ll be writing several features, editing a special issue, and spearheading the programming at a special event. Details to follow.
Someone recently asked me how I evaluated the success of my books. It’s not an easy question. One answer is the substance of the books. I believe that each has added something new and significant to the world of ideas and done so in a way that is pleasurable to read. I learned a lot from the research and thinking it took to write them, and I’m proud of the results. A more objective measure of success is that they’ve all stayed in print, continuing to sell after the initial attention died down. People still discover and share them. They are still part of the relevant conversations.
Then there are the actual numbers. Back in September, you may have read the claim that “50% of books published sell fewer than 12 books,” a number derived from discovery in an antitrust case challenging the proposed merger of two major publishing houses, Penguin Random House and Simon & Schuster (publisher of two of my books). This claim is not true, as explained in this excellent post and a couple of knowledgeable comments.
Book sales are complicated, especially in aggregate. Do you lump together sales of adult coloring books, self-published family histories, wonky think tank volumes, and the latest Stephen King novel? If so, do the numbers mean anything significant? Further complicating the question, as my latest royalty statements demonstrate, is that what the author and reader think of as the same book can be several different ones statistically. Each different format—hardback, paperback, ebook, audiobook—counts as a separate item.
But it’s safe to say that even from major presses most books sell relatively few copies. Like moviemaking, publishing is what economist Art DeVany calls “a business of the extraordinary.” It loses money on most of its products and makes it up with a few big, often surprising, hits. “Roughly 66% of those books from the top 10 publishers sold less than 1,000 copies over [their first] 52 weeks,” writes Kristen McLean, who crunched the numbers for NPD BookScan after the questionable “12 books” stat started making the rounds. (For details, see her comments here.) In the scheme of things, therefore, I qualify as a successful author. My books aren’t best-sellers, but they’re pretty far out in the upper tail.
What does that mean in numbers?
It’s royalty statement season, when I get semi-annual statements on my book sales and payments for those that have “earned out” their advances. A publishing contract specifies the royalties an author gets for each book sold. The money the publisher pays the author to write the book is an “advance against royalties.” It’s usually divvied up in installments of, say, a third on signing the contract, a third on approval of the submitted manuscript, and a third on publication. As long as the manuscript meets with the publisher's approval, the author keeps the full advance. The publisher is therefore taking a calculated risk on the book, guessing how well it might sell while also providing financing for writing it.2
As the book sells, the royalties specified in the contract are counted against the advance. If the total reaches the amount of the advance, the author begins to get royalty checks.
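The earn-out arithmetic is simple enough to sketch. Here is a minimal illustration in Python; the advance, cover price, and royalty rate below are invented round numbers for the example, not terms from any actual contract.

```python
# Hypothetical sketch of how an advance "earns out".
# All figures are invented for illustration.

def copies_to_earn_out(advance: float, royalty_per_copy: float) -> float:
    """Copies that must sell before royalty checks begin."""
    return advance / royalty_per_copy

# Suppose a $30,000 advance and a 10% royalty on a $25 hardback.
advance = 30_000
royalty_per_copy = 0.10 * 25  # $2.50 credited against the advance per copy

print(copies_to_earn_out(advance, royalty_per_copy))  # 12000.0
```

Under those made-up terms, the author sees no royalty check until copy 12,001; every sale before that simply pays down the advance the publisher already risked.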
Not surprisingly, my books that generate royalty checks are the two for which I got the smallest advances: The Future and Its Enemies (I recently deposited a royalty check for ~$85) and The Fabric of Civilization. Although it has sold the most copies, The Substance of Style will never earn out its generous advance, which was my little slice of turn-of-the-century “irrational exuberance.”
The Power of Glamour has never taken off like my other books. Disregarding good advice about pitching articles to advertising trade publications in favor of organizing fun parties, I did a terrible job marketing it. I also think the title sets up the wrong expectations. (I wanted Decoding Glamour or Glamour Decoded.) Although it’s a beautiful object and written in an accessible way, it’s the book that does the heaviest intellectual lifting, constructing an original theory of glamour as a form of visual persuasion. If you’re expecting fun fluff, you’ll be disappointed. And if you want rhetorical theory you may not pick it up because you think “glamour” means “fashion.” Yet even that book’s disappointing numbers put it in the upper tail of book sales.
As of June 30, here are the totals, excluding translations:
The Fabric of Civilization (2020) 27,110 (not including the audiobook, for which I don’t have numbers)
The Power of Glamour (2013) 8,112
The Substance of Style (2003) 43,486
The Future and Its Enemies (1998) 34,716
Buy the books from Amazon by using the links!
1 A few years later, I acquired the more specific ambition of becoming editor of Reason magazine. Through a remarkable set of coincidences, I managed to achieve that goal as well—although I fell short of my ambitions for the role.
2 Whether that financing is sufficient depends on the amount and the book. My advance for The Fabric of Civilization did not cover the costs, but I fortunately got a grant from the Alfred P. Sloan Foundation’s Program for the Public Understanding of Science, Technology, and Economics.
In August, I intervened to inject some local policy knowledge into a kerfuffle over venture capitalist Marc Andreessen’s hypocrisy on housing construction. Famous for his 2020 manifesto “A Time to Build,” Andreessen wouldn’t seem a likely NIMBY. Yet there he was, along with his wife, filing a public comment to oppose a modest amount of multifamily housing construction in his ritzy suburb of Atherton.
It looked bad. It deserved rebuke. And it didn’t matter one bit.
As I wrote in a Bloomberg Opinion column:
Under a law passed in 1969, two years before Andreessen was born, every eight years California cities have to project the future demand for housing in several income tiers and specify where those homes might be built. The long, complicated and expensive ritual has produced many hearings and documents but not much housing. It offered too many loopholes.
Cities could lowball the numbers. They could identify theoretical sites in their plans but, when later faced with a real development proposal, impose delays and restrictions that required scaling down the project, increasing the sales prices or rents, or abandoning the whole thing.
“Housing element” plans didn’t have to make sure the owners of prospective sites were willing to sell. As long as cities went through the right motions, they faced no consequences for obstructing new housing.
Now they do.
California has toughened its approval process for the housing-element plans, and cities that don’t come up with an acceptable plan face fines of up to $100,000 a month, which a court can multiply as much as sixfold, to $600,000 a month. They can also lose state funding, and the state can even suspend their power to regulate land use. The state can review at any time whether a city is complying with its promises. If not, it can require streamlining development permissions to keep those commitments.
That article—read the whole thing on my website here—discussed several different policy reforms that will bring more housing to Atherton. But the “housing element” hammer is what I want to focus on here. It’s coming down hard on cities in my own area. The threat to suspend land-use regulations isn’t theoretical.
Cities that don’t meet their deadlines face something known as the “builder’s remedy,” where the state suspends local zoning laws until the city comes up with a suitable plan. Any development that files for permission during the suspension gets a free pass. That’s what has happened in two of the most housing-short coastal cities in Southern California: Santa Monica and Redondo Beach.
In Redondo Beach, a developer filed plans to convert a power plant slated for decommissioning next year. The plans envision a complex including residential towers with 2,290 units, along with office, commercial, and hotel space. It’s the kind of large-scale project current residents hate. Another 5,000 people might be able to live in Redondo Beach! The application was filed in August. The city had submitted a Housing Element plan in July but hadn’t yet received approval; the plan gained state approval a month later. Litigation will undoubtedly ensue.
Meanwhile, in Santa Monica a dozen projects with nearly 4,000 housing units slipped in while the city’s zoning ordinances were suspended. They include a 15-story 2,000-unit building in a low-rise, mostly light-industrial area near Santa Monica’s eastern border with L.A. Nearby businesses include the headquarters of Riot Games. Santa Monica is a major employment center whose housing hasn’t kept up with its job creation, leading to major traffic jams heading to the freeways. As a resident of West L.A., just east of the 405, I feel the effects personally. Unless I can walk to my destination, I generally avoid going west of the 405 after 3:00 p.m. because coming back even a couple of miles can easily take 45 minutes. No afternoon eye doctor appointments for me.
A couple of weeks ago Matt Yglesias had a good Substack post looking back on a decade of the YIMBY movement. It traces some of the intellectual origins of the successful efforts to loosen regulations on housing, focusing mostly on the activist world. I’d ding it for giving short shrift to the academics at UCLA, USC, and Berkeley, whose empirical research has been important to understanding the obstacles to new housing, particularly in California. Also important are Ed Glaeser at Harvard and Joe Gyourko at Wharton, whose housing research I first wrote about way back in 2002 and revisited in 2007, when I didn’t quite realize just how bad a mess California’s restrictions had made.
I have evidence from as far back as the seventh grade demonstrating my obsession with housing policy. In 1987, when Matt Yglesias was in elementary school, I published a WSJ op-ed titled “Tapping the Shadow Housing Market,” which I’ve just dug up and added to my website. It anticipates the recent reforms loosening parking mandates and restrictions on “accessory dwelling units.”
Unlike many of the housing-obsessed, I’ve never had the inclination to impose my personal lifestyle preferences on everyone else. I prefer dense, urban neighborhoods but defend the suburbs. I believe that building will follow demand and prices will send the right signals, resulting in a mixture of housing that reflects a mixture of tastes, budgets, and lifecycle stages—if, that is, regulators allow housing markets flexibility. Count me in the Market Urbanism camp, not the “everyone should ride a bike” school.
When I moved to L.A. in 1986, one of the things I most appreciated was the abundance of housing. I want today’s young people to feel the same possibilities I felt back then, including the chance to have their own space. And, like Matt, I think things are looking up:
Overall, though, I think the future is bright. Ten years ago, housing reform was considered a bizarre niche issue that nobody in the audience cared about and where change was politically impossible. Now, it’s a mainstream topic of discussion with real political champions, local activist groups around the country, and a blueprint for state-level change. More and more people also acknowledge these days that housing is just so central to the economy that you can’t treat it as a tiny quirky obsession of urbanists — anyone who cares about sustainable growth needs to care about housing supply.
How Substack got started. I read the free version of Sinocism, which is excellent. If I had a more direct professional interest in China I would definitely pay for a subscription.
Robert Graboyes, an economist with wide-ranging interests that overlap with mine, has a good Substack called Bastiat’s Window. Most of it is pretty serious. But this post about celebrity encounters made me laugh several times.
I think about breast cancer every October, and not because it’s “Breast Cancer Awareness Month,” which I find some mixture of ridiculous and distasteful. I’m all for raising money for breast cancer research and treatment. But making people “aware” by slapping pink on everything from the water in public fountains to specials at Dollar Tree doesn’t do much to save lives. When someone’s selling “breast cancer awareness” tchotchkes, whatever reaches the cause can’t exceed the seller’s profit margin, and is quite likely far less. (See this 2015 Business Insider analysis of how little money from the NFL’s breast cancer merch makes it to the cause.)
It’s not the orgy of pink that reminds me of breast cancer. It’s the Nobel Prize in Physiology or Medicine. I have a rooting interest, and so far I’ve been disappointed. I want the prize to go to UCLA cancer researcher Dennis Slamon, who in recent years has been on the Great Mentioner’s short list (an improvement since I started paying attention a decade or so ago).
Slamon’s work did two things: Beginning with HER2+ breast cancer, it demonstrated that cancers could be identified by specific genetic variants, rather than merely where they occur in the body. Then it showed that those variations could be targeted and treated with specific antibodies. The first practical result was the drug Herceptin, which treats the roughly 25 percent of breast cancer patients with an especially aggressive form. From a recent UCLA profile:
The key finding by Dr. Slamon and colleagues showed that the monoclonal antibody Herceptin binds to, and destroys, abnormal cells without harming nearby healthy tissue, much like a laser-guided missile hitting a select target. This was a major departure from then-common chemotherapies that Dr. Slamon refers to as the “hand grenade” approach, indiscriminately killing healthy as well as diseased cells. Proving that antibodies that bind to cancerous cells are an effective method for treating solid tumors transformed cancer care at a time, in the 1980s, when most cancer therapies were focused on excising tumors and developing better chemotherapies. The discovery opened up new research avenues, leading to multiple other targeted treatments that utilize antibodies to attack the disease at its genetic roots. Between 2.7 million and 3 million women have been treated with Herceptin, and women with HER2-positive breast cancer now have among the highest survival rates compared with all women with breast cancer.
Here’s a video that explains further (and includes researchers Axel Ullrich and H. Michael Shepard, who might share in the same prize, as they did in the Lasker award sometimes called the “American Nobel”).
If U.S. scientific research were more “efficiently” funded, none of this research might have happened. After the scientific triumphs of World War II, Vannevar Bush, who had directed the wartime Office of Scientific Research and Development, proposed the establishment of a similar peacetime organization through which all science funding would flow. After some legislative iterations, we got the National Science Foundation, but it has no monopoly even on federal government funding. In addition to the obvious National Institutes of Health, money flows from assorted pockets of the Defense Department, the Energy Department, and more. On top of that are many philanthropic foundations, including heavy hitters like the Howard Hughes Medical Institute, the Bill & Melinda Gates Foundation, and the American Cancer Society. To a good technocrat like Bush, it would seem like a disorganized mess.
Back in the 1980s, Slamon amassed a collection of different kinds of cancer tumors removed from patients, believing that analyzing their cells’ molecular biology could unlock the mystery of why the cancers spread. UCLA wasn’t a major cancer-research institution, Slamon was a nobody, and his idea was decidedly out of the mainstream. You can imagine how his grant proposals were received. When he applied for an NIH grant to fund the tumor bank, he says, “It came back with a laugh track.”
Fortunately Axel Ullrich, then at Genentech, gave a seminar at UCLA, presenting his work on growth-regulating genes. He agreed to give Slamon samples of the DNA he’d identified to test against the tissues in the tumors. To do the painstaking work on a low budget, Slamon hired a UCLA freshman named Wendy Levin:
Now a physician, Dr. Levin is an oncologist in San Diego County, but while an undergraduate, she spent nights and weekends “sometimes sleeping on the floor in the lab,” she says, extracting DNA from tumors. It was tedious work, taking a piece of tumor tissue that had been frozen in liquid nitrogen, grinding it up, extracting the DNA and looking at one gene at a time for something awry. But the work bore fruit on a Saturday afternoon in June of 1986, when she found a match between the HER2 gene and a breast cancer tumor. “My heart started thumping,” Dr. Levin says. “It was a true eureka moment.” She excitedly called Dr. Slamon at home, offering to drive out to his house to show him the results. Dr. Slamon decided it would be OK to wait until Monday.
The usual sources were still not interested in paying for research. But in 1989, Slamon was treating Hollywood honcho Brandon Tartikoff, best known for his stint as president of NBC, for Hodgkin’s lymphoma. Tartikoff’s wife Lilly was grateful for the care and asked Slamon what she might do to help him. He told her about the idea of finding a drug to treat HER2+ breast cancer. Soon thereafter, in a classic Hollywood moment, she ran into Ronald O. Perelman, who owned Revlon, at Wolfgang Puck’s original Spago restaurant. She gave him the pitch: You own Revlon. Revlon sells to women. Women get breast cancer. You and Revlon should support this research. He agreed to let his representative meet with Slamon.
At the meeting, Slamon was accompanied by his colleague John Glaspy, who is a notably blunt-spoken person. Even if they got government funding, Glaspy warned, it would take several years and by then “we’ll have a Rose Bowl full of dead women” from breast cancer. The pitch worked. As Mona Gable recounted in a 1998 article for UCLA Magazine:
Perelman not only came through for Slamon, but he made an astonishing offer: $800,000 a year for three years, a total of $2.4 million. As support from an American corporation to a single scientific group, the gift was virtually unprecedented. Just as amazing, the research funding was unrestricted. Slamon could use the money however he saw fit.
“It would have taken four concurrent National Cancer Institute grants to build the equivalent of the program Revlon funded with just the stroke of a pen,” Slamon says intently. “And there was no writing a grant, submitting it, waiting eight to 12 months to hear. This gift allowed us to follow our leads almost instantaneously, and made a huge difference in this whole story.”
Even with Revlon’s help—and money from one of those Pentagon pockets—the development of Herceptin was such a difficult journey that it became a Lifetime movie. Genentech was a young company with little margin for error and several times threatened to cancel drug development.
In 1998, the drug was approved for treatment of Stage 4 HER2+ breast cancer and in 2006 it was approved for treating early stage cancers. A year later, it saved my life. As I wrote in the acknowledgements in The Power of Glamour:
In July 2007, barely a week after receiving the final signed contract for the book, I was diagnosed with what turned out to be HER2-positive breast cancer, a particularly aggressive form of the disease. Twenty years earlier, I would have had only a fifty-fifty chance of survival, given the details of my case. Today, I am officially cured. Although I underwent the traditional treatments of surgery, chemotherapy, and radiation, what made the crucial difference was the pathbreaking biologic drug Herceptin, first approved by the Food and Drug Administration in 1998.
The research that led to Herceptin was funded not by the federal government or a traditional cancer charity but by money from Ronald O. Perelman, in his role as chairman of Revlon, and by fundraising in the 1990s at a series of star-studded events called the Fire and Ice Balls. I am deeply grateful to the many people, only one of whom I know personally, responsible for bringing Herceptin to the world: to Dennis Slamon for his scientific vision; Lilly Tartikoff for her fund-raising energy; my oncologist, John Glaspy, for his persuasive eloquence; the researchers at Genentech for development and testing; and Perelman and Revlon for their financial contributions. In a very real way, I owe my life to the glamour of makeup and movie stars.
As the ever-frank Dr. Glaspy told me, “You’re cured, and if you ever get breast cancer again, it won’t be the same cancer.”
The image above is Tony Morley’s entry into my contest asking for visions of a positive future. Tony used the AI image engine from the team Midjourney with the prompt “Metropolitan skyline of a city in North Africa.” He writes that it represents, “A future where the newborn daughters and sons in Mali and Niger have the same life expectancy and living standards as those of the United Kingdom.” Follow Tony’s explorations of progress on Twitter. The deadline for entries is October 31 and the contest is described in this post. Please spread the word—and enter!
Speaking of AI image engines, I was inspired by the dismal landscape at the top of this post on nuclear power by Jim Pethokoukis to wonder why cooling towers (nuclear or otherwise) can’t be painted to look cheerful. That led to some fooling around with image generators, primarily Stable Diffusion. Attempts to create images of cooling towers painted with murals confirmed something that my long-ago Harvard Extension Hebrew teacher said: “Prepositions are the hardest part of a language.” Stable Diffusion had a tough time understanding that asking for a mural on a cooling tower is not the same as asking for a mural in a cooling tower or a mural of a cooling tower. (A few days later I saw this Astral Codex Ten post, from which I learned that the difficult preposition problem is called “compositionality” in AI circles.) But I did get a few reasonable results.
Imagine the murals people with actual artistic skill might come up with. Maybe the intimidating dull gray of cooling towers reflects technical issues that water towers—like the famous Gaffney peach—don’t face. But finding a way to make cooling towers look friendlier would go a long way to improving nuclear power’s public image.
In the revitalized (now web-only) Print Magazine, Steven Heller interviews Andrea A. Trabucco-Campos and Martín Azambuja, the designers behind a project using AI prompts to create typography in the styles of iconic artists. They’ve recently published some of the results in a book titled Artificial Typography. From the interview:
Is “artificial” the best way to describe this material or is there another word that better describes what you are doing?
Artificial Typography is an obvious wordplay we loved as a book title due to its immediacy and relation to Artificial Intelligence (AI). It also lends itself to a series of books we’re thinking about publishing in the future, such as Artificial Architecture, Artificial Objects, etc.
Beyond that, “artificial” is an accurate way of describing these images if we consider the mode of creation and how this word is commonly understood: something not existing/originating in nature, rather produced by human processes. It is doubly removed from us since it is not just created by our hands with tools (think a letter drawn by a pen, brush or Illustrator), but instead produced semi-autonomously by machines and systems we’ve created.
Initially, we were also enamored by the idea of “conversation” and played with it as a title. The exchange that happens with AI machines is a form of conversation, and perhaps one of the most intellectually satisfying visual-verbal connections that have been devised between human and machine. As mentioned, you feed it a text prompt and through a simple string of words and their order, the AI system generates images that are sometimes unexpected, sometimes weird, sometimes ugly, and quite often stunning. You then can keep on iterating on the images or versioning the ones that are most successful, as well as tweak the text prompt and learn how the AI reacts to subtle or drastic shifts.
I like this notion of “conversation,” which captures the iterative process as well as the nature of the best conversations. You have a general idea in your head and you try to express it in words. The system in effect says, “Do you mean this?” And the back and forth proceeds, with the two of you trying to come to a common and pleasurable understanding that makes something new in the world.
Even in their current limited state, AI image generators are changing how we work with images in a visually saturated culture. The better image generators get, the more we’ll use them—and the more we’ll need to remember that seeing shouldn’t necessarily be believing. (The negative hot takes practically write themselves.)
Will Knight, who covers AI at Wired, has written several interesting articles on image generators, including one making the point that they can enhance human creativity.
People who have been granted early access to DALL-E have found that it elevates human creativity rather than making it obsolete. Benjamin Von Wong, an artist who creates installations and sculptures, says it has, in fact, increased his productivity. “DALL-E is a wonderful tool for someone like me who cannot draw,” says Von Wong, who uses the tool to explore ideas that could later be built into physical works of art. “Rather than needing to sketch out concepts, I can simply generate them through different prompt phrases.”
I could imagine image generators making those supposedly worthless art history degrees hot commodities. Although Stable Diffusion understands what the style of Keith Haring is, it has a much harder time grasping what makes Carlo Crivelli so weird and distinctive. (It’s not the Renaissance architecture and clothes.)
My latest article is a review of W. David Marx’s Status and Culture for the WSJ’s fall books issue. Here’s an excerpt:
“Status and Culture” is blessedly free of the moralizing that so often mars analyses of status. Mr. Marx recognizes that status and status-seeking are human universals: “All status symbols rely on objects and behaviors with practical or aesthetic value that enrich our lives,” he writes. But the book often feels anchored in the second half of the 20th century, when the Beatles, Pop Art, and preppy style were salient examples and mass media essential to cultural diffusion. It doesn’t reach back to, say, the Italian Renaissance to more fully test its theories. Only in the final chapter does it begin to explore our own “era of vast quantities, deep specificity, and breakneck speed, where few individual artifacts, artworks, or conventions leave a dent in society or bend the curve of history.”
In today’s sea of instantly available, constantly ranked cultural production, Mr. Marx argues, everything and nothing has cachet. The result, he worries, is to “debase cultural capital as an asset, which makes popularity and economic capital even more central in marking status.” In some ways, the world he describes sounds like the 1950s, with the culture of TikTok as the new mass media, and “keeping up with the Joneses” measured in likes.
Now, however, individuals with specific passions and tastes can find the things they value far more easily. “We live in a paradise of options, and the diminished power of gatekeepers has allowed more voices to flourish,” Mr. Marx acknowledges. “The question is simply whether internet content can fulfill our basic human needs for status distinction.”
The book is a nice introduction to the literature on status—the bibliography is remarkably comprehensive—but, as those who’ve read The Substance of Style might predict, I have some disagreements with Marx’s single-variable explanation.
And now, the news….
After more than 11 years as a contributor, I’m leaving Bloomberg Opinion. When I first joined the then-new Bloomberg View in May 2011, I figured it might survive three years. Happily, it’s still thriving. But we’re parting ways: I want to pursue more in-depth writing opportunities, including idiosyncratic history- and textile-related projects, and they want to focus on a tighter group of frequent contributors. It’s been a good ride: great editors, freedom to choose topics (within the constraints of a news-pegged column), and old-fashioned pay. Here’s hoping they forget to cut off my access to the other side of the Bloomberg paywall…
Assorted links
The case for abolishing the National Environmental Policy Act: It’s a strong one, made by Jeremiah Johnson at Liberal Currents.
Everything you never knew you wanted to know about Baltic birch plywood, by Anna & Kelly Pendergrast at The Prepared.
Excellent Q&A with Scott Lincicome about industrial policy, interviewed by Jim Pethokoukis (podcast and transcript). Excerpt:
What if the policy was, “Here’s how we’re going to deal with climate change: We need to pull carbon from the air”? Carbon removal technology is something that doesn't really exist right now, other than in some very experimental forms. “We're going to fund it, just like Apollo, just like the Manhattan project.” Would you favor something like that, assuming you thought there was the actual need to pull carbon from the sky?
The industrial policy approach is that we need that carbon capture technology to be made by Americans in America. And not just deployed by Americans; we need it made in America. Whereas the more free-market approach would be a prize: We don't care how it's made. We don't care who makes it, with a few security-related exceptions. If tomorrow the Korean government or Samsung or whatever comes up with the most amazing carbon capture technology in the world — it's like Mr. Fusion from Back to the Future, you just slap it on a power plant and suddenly we're zero emitters — you win the prize. We don't care that it was made by a Korean company. We don't care that they are going to be Korean jobs and not American jobs. No, the industrial policy side says, “We care a lot about who makes this stuff and that it's made in America, using American materials.” The pandemic, for all of its terribleness, provided us a pretty good example of the industrial policy approach to pandemic stuff and the market approach. And that's in the vaccines. The more free-market approach, essentially a prize but a procurement contract, was we went to Pfizer and BioNTech, and if you look at the contract for those vaccines, it said we have nothing to do with your supply chain. “We don't care how you do it. We don't care what you do. Just get an FDA-approved vaccine and we are all in, we're going to pay.” That's it. There are clauses in that contract that literally say we will have no control over how you make this whatever. A ton of global collaboration, of course. BioNTech is a German company, blah, blah, blah.
Several people have sent me this Scientific American article about Michèle Hayeur Smith and her work on what Viking textiles tell us about women’s roles. Careful readers may recall that I drew on her work in chapter five of The Fabric of Civilization, where I wrote about textiles used as money in Iceland, as well as China and West Africa. She has since published a book, and I was able to enlist her to give a talk to the Southern California Handweavers’ Guild, which you can see on my YouTube channel (please subscribe to my channel and watch my videos so that my numbers get high enough for YouTube to share the money from those annoying ads):
As I’ve said here before, I’m a big fan of Jim Pethokoukis, and I highly recommend his Faster, Please! newsletter. I’m also a fan of Adam Thierer and his arguments for the importance of “permissionless innovation” and “evasive entrepreneurs.” So I was happy to read Jim’s recent Q&A with Adam—except for one little thing. It starts with Jim’s favorite cultural obsession, nostalgia for 1960s science fiction: “The 1960s was full of optimistic sci-fi, most notably The Jetsons and Star Trek. Does the fact that the '60s were followed by the pessimistic 1970s show sci-fi simply doesn’t matter?”
In the Postrelian tradition of attacking my allies’ arguments—I call it taking them seriously, while my husband calls it stabbing people in the front—please allow me to make a few points about this fixation (not just by Jim) on 1960s pop s.f. and recent dystopian works.
1) The Jetsons was not science fiction any more than The Flintstones was archeology. It was, like its Stone Age partner, a midcentury family sitcom—I Love Lucy/The Honeymooners/Father Knows Best with different backdrops and dumber jokes.1
The commentary (such as it is) about technology mostly consists of complaints about devices breaking down and costing too much. Automation also means George and Jane Jetson do nothing all day except push a few buttons. If real, their lives would be incredibly boring. (The Feminine Mystique was a bestseller for a reason.) The show is definitely not Star Trek.
The Jetsons is graphically appealing, but it only works because we don’t take it literally as a portrait of the future. The Jetsons live in a world without trees, grass, or privacy. Anyone in a flying car can peer straight into their windows, which also appear to be open all the time. People live in the sky for no reason other than it makes for cool drawings. You can’t take a walk around the neighborhood. Ever wonder, What’s on the Ground in The Jetsons? (Spoiler: “Homeless people and walking birds.”)
2) Star Trek’s fundamental appeal was not about the future or technology per se. The show portrays a setting in which smart people have new experiences and learn new things, solve important problems, and forge deep friendships. Nobody worries about money or office politics. The show’s values are humane. Everyone’s job is important and the boss deserves respect. As I learned in a big survey I did while researching The Power of Glamour, for many of its fans Star Trek represents an ideal workplace.
Star Trek’s vision of a nerd-friendly universe made the future glamorous, but only to the select few for whom that vision resonated. When originally broadcast, Star Trek had lousy ratings. Most people didn’t find it especially appealing.2
Its pop culture success dates to syndicated reruns in the 1970s, which is when I saw it. (The first fan convention was in 1972.) By then, its New Frontier spirit, complete with Cold War analogies, was already out of step with the times. The show attracted fanatical devotion partly because popular culture offered few (no?) other celebrations of earnest nerds and their values.
3) Dystopias are far from Hollywood’s main products. I personally worry more about the ubiquity of pharmaceutical company villains and complex government conspiracies. (Did you ever see Scandal?) But I understand why tech horror obsesses D.C. policy wonks. They look for movies about A.I. or climate change or fill-in-the-dystopian-blank and find plenty of evidence of anti-technology attitudes infecting the culture.
But Hollywood’s biggest movies are not dystopias. You may have heard of the Marvel Cinematic Universe. It’s kind of big. It’s also technologically optimistic science fiction. Tony Stark! Wakanda! The Pym Particle! Yes, sometimes you get Ultron, but if you think Hollywood is only serving up technological gloom and doom you are definitely not reading Variety.
Meanwhile, on the prestige side, there are movies like Her (2013), Arrival (2016), and Everything Everywhere All at Once (2022). All have heart, as well as a nuanced and non-negative view of technology. And I’d argue that the future of A.I. is likely to be improved by the existence of thought-provoking movies like Ex Machina.
4) The Graduate, released in 1967, was a contemporary of the original Star Trek. A better question to ask about popular culture and the pessimistic turn is why this scene was so powerful. What made audiences find this career advice creepy and ridiculous? Not dystopian science-fiction movies.
5) In chapter three of The Future and Its Enemies I adopt a maxim from Henry Petroski to explain the open-ended nature of progress: “Form follows failure.” To quote the book:
Far from a utopian concept, this sense of progress acknowledges that life is not perfect, that any improvement requires ingenuity and work, and that different people have different notions of what constitutes a “better” idea. “Form follows failure,” is how civil engineering professor Henry Petroski, whose popular books explore the histories of such mundane objects as zippers and forks, sums it up:
The form of made things is always subject to change in response to their real or perceived shortcomings, their failures to function properly. This principle governs all invention, innovation, and ingenuity; it is what drives all inventors, innovators, and engineers. And there follows a corollary: Since nothing is perfect, and, indeed, since even our ideas of perfection are not static, everything is subject to change over time. There can be no such thing as a “perfected” artifact; the future perfect can only be a tense, not a thing.
As soon as we have something that improves over the past, we see what’s wrong with it. Unalloyed cheeriness doesn’t drive progress. Dissatisfaction does. What’s true for “made things” is also true of social and cultural artifacts and practices. One generation’s accomplishments look like unsolved problems to their successors.
The “plastics” scene in The Graduate isn’t about polymers. It’s about a young, economically privileged generation feeling trapped into pursuing inauthentic lives. To a man who lived through the Depression and World War II, the prospect of security in a growing, high-tech industry is enticing. To Dustin Hoffman’s Benjamin, it’s horrifying. He doesn’t know what he wants, but he knows it isn’t a job at DuPont.
When I saw The Graduate more than a decade after it came out, I didn’t find it compelling. But if you’re concerned with preserving technological and social dynamism, you have to take seriously the discontent the movie represents. The Graduate didn’t create that discontent. It reflected it. As I wrote in this essay:
In a liberal order, however imperfect, the competition, criticism, innovation, and open-ended pursuit of better ways of doing things that characterize economic dynamism also give rise to cultural dynamism. Free individuals exercise voice and exit. They use what I’ve called “criticism by expression” and “criticism by example”—otherwise known as complaining and entrepreneurship—to shape new norms and institutions. And since the culture and the economy are not, in fact, separate spheres, the two forms of dynamism affect one another.
Culture is just as complex, dynamic, and unpredictable as science, technology, or markets—and just as driven by discontent.
In 2014, I wrote a Bloomberg column on these issues, which I will send out as a separate “From the Archives” post.
Odds and Ends
The Fabric of Civilization is on sale for $3.99. At that price, it’s worth buying even if you own a print copy, just in case you want to search it.
Also, here’s the periodic reminder that the references for The Fabric of Civilization are online here.
How Spider-Man Led to the Invention of the Prisoner Ankle Monitor
Will California law allow this 2,300-unit project, despite local NIMBYs? (If you read my recent column on Atherton, you might guess the answer.)
How Spock Became a Sex Symbol (Bloomberg column I wrote when Leonard Nimoy died)
Interview with me about The Power of Glamour (old but good!)
Got a positive vision of the future? Enter my contest, described at the bottom of this post.
I recently spent time visiting my aging parents—my father will be 88 in November and my mother is 86—so I have elder care on the brain. My parents have a diversified portfolio of children to help them out. I, on the other hand, have none, which makes futuristic elder care predictions all the more salient.
What happens when the population of old people gets too big for the supply of paid and unpaid caregivers? Raising wages is the obvious answer but, as I discussed in a March 2020 Bloomberg column, the economics are terrible.1
To save you a click, I’m going to repeat the whole thing, with the critical facts highlighted, before I get to my new point:
They are the most numerous and lowest-paid U.S. health-care workers: the 4.5 million caregivers who assist elderly and disabled people with such daily activities as bathing, dressing, feeding and toileting. Compared to other medical workers, they have the most frequent and personal contact with the elderly. That puts them on the front lines of the coronavirus pandemic.
Known in the business as direct-care workers, these paraprofessionals include certified nursing assistants, who work in nursing homes, and home health aides and personal-care assistants, who often travel between multiple clients. Their median wage is $12.27 an hour, with home care workers making the least and nursing assistants the most. As the pandemic spreads, these workers are especially vulnerable and increasingly difficult to replace.
Don’t they deserve more money? The National Domestic Workers Alliance certainly thinks so:
Home care workers take action when health crises hit to protect the most vulnerable, and $11/hour isn’t fair pay. We need to invest in care workers — key first responders in this crisis. https://t.co/j9Mnmgh3FY
— Domestic Workers (@domesticworkers) March 14, 2020
But the pay isn’t low because the people who hire caregivers are greedy and mean-spirited. Neither is it because the work is easy or unimportant. It’s a much tougher problem. Caregiving is vital, but so labor-intensive that at higher wages, hardly anyone could afford it.
Consider some basic math. There are 8,760 hours in a year (8,784 this leap year). At $11 an hour, that’s $96,360. At $12.27, it’s $107,485. At $15 an hour, it’s $131,400.2
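To make that arithmetic concrete, here is a minimal sketch (the hourly rates come from the column; the Python framing is mine, purely for illustration):

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 (8,784 in a leap year)

def annual_cost(hourly_wage: float, hours: int = HOURS_PER_YEAR) -> float:
    """Annual cost of round-the-clock care at a given hourly wage."""
    return hourly_wage * hours

for wage in (11.00, 12.27, 15.00):
    print(f"${wage:.2f}/hour -> ${annual_cost(wage):,.0f}/year")
# $11.00/hour -> $96,360/year
# $12.27/hour -> $107,485/year
# $15.00/hour -> $131,400/year
```

Even at today’s lowest caregiver wages, round-the-clock coverage costs roughly a professional salary per client, which is the heart of the affordability problem.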
And that’s before the client pays for room, board, and other medical care. About 16% of caregiver payments come from family budgets, while private insurance covers only 11%. The rest comes from government programs, primarily Medicaid.
Not everybody requires 24/7 care, of course, but many people do. “Need” is as much a matter of what people can afford as it is an objective criterion. For families deciding how to spend dwindling resources, it can be a matter of supplementing paid with unpaid labor, which can require cutting back on a family member’s own work hours. For state legislatures allocating the Medicaid money that covers 52 percent of long-term care, it means tradeoffs between elder care and hospital reimbursements or maternity care, not to mention schools and highways.
Direct-care workers are in short supply, with skyrocketing turnover rates. In 2018, the latest year for which data is available, the turnover rate among home health aides hit a new high of 82 percent, according to the annual survey by Home Care Pulse. The agencies and institutions that employ these workers aren’t just competing with each other. They’re competing with businesses such as Amazon.com that have been raising entry-level wages.
“Turnover was so severe in 2018 that more than half of the participants had to turn away new clients because they didn’t have enough caregivers,” reported the trade publication Home Health Care News. Tight Medicaid reimbursement limits make it hard to raise pay or expand service. Higher reimbursements would permit higher pay but, again, the money would have to come from somewhere else.
We’ve seen this dilemma before: absolutely essential work that takes so long that wages remain low even as it consumes a high proportion of total expenses. Pre-industrial spinners worked for incredibly low wages, yet their pay was often the biggest expense in making cloth. And they were always in demand. “The spinners never stand still for want of work; they always have it if they please; but weavers sometimes are idle for want of yarn,” wrote the agronomist and travel author Arthur Young, who toured northern England in 1768.
Before the Industrial Revolution, Indian hand-spinners, the world’s best, took about 100 hours to produce enough cotton thread to weave the fabric for a modern pair of jeans — not including the time cleaning and preparing the fiber beforehand. Spinning the equivalent amount of wool on a European spinning wheel took about 110 hours. At the low modern wage of $11 an hour just the thread in a pair of trousers would have cost well over $1,000, not including the time spent dyeing, weaving or sewing. (For sources and an in-depth discussion, see my forthcoming book, "The Fabric of Civilization: How Textiles Made the World.") Only by finding mechanical ways to get much more thread per hour did people finally make cloth abundant, leading to the takeoff in worldwide living standards that economic historian Deirdre McCloskey calls the Great Enrichment.
Boosting productivity, and the wages it supports, is tougher for in-person services like caregiving. In his 2017 book, “Who Will Care for Us?” economist Paul Osterman advocates giving caregivers more training and allowing them to provide simple medical services like administering medications, thereby reducing the need for more expensive nurses to provide frequent care. Many people get into the field because of an interest in health care only to find themselves ignored by their medical colleagues, treated as little more than baby sitters and stymied when they seek to learn more.
Current regulations, Osterman notes, can lead to such absurd situations as caregivers guiding the hands of dementia patients to “self-administer” eye drops, skirting rules against aides doing that job. But nurses, who are well organized and politically influential, fight like crazy against any incursion on their territory.
To boost productivity more significantly, potentially improving both care and wages, start-ups are experimenting with artificial intelligence. An intriguing example is Cherry Home, which markets an unobtrusive monitoring system that distinguishes normal behavior patterns from abnormal ones, including falls, restless sleep or signs of confusion. When something looks off, the system alerts a monitoring center, which contacts caregivers, family members or emergency services as needed. The system has a privacy mode that displays stick figures rather than images of people, and it can communicate with someone in distress without requiring them to press a button. In theory, such systems could allow individuals to stay in their homes without having aides or family members present all the time.
By calling attention to the important work of aides in eldercare, the coronavirus offers an opportunity for a conversation about how to increase the respect accorded their work within the health-care system, how to improve communication between the people who spend the most time with the frail elderly and other medical professionals, and how to build their skills and widen their responsibilities. But as long as it takes thousands of hours a year to care for a single person, wages can’t go up much.
In the pre-industrial era, everyone knew that spinners were important. Before smokestacks, a spinning woman was the iconic representation of “industry.” But to raise wages, you need more than respect and goodwill. You need new technology.
Anyone looking for A.I. applications ought to be thinking about how to enhance elder care. Cherry Home’s system demonstrates that new technology may not mean anything as complicated and intrusive as a humanoid robot.
Robots for elder care tend to be pitched as substitutes for friends or pets or human caregivers. They dance! They tell jokes! But as the comments on this YouTube video suggest, people tend to find the cheery forced companionship of robot carers more than a little suspect.
Anthropomorphizing robot caregivers misses one of their biggest advantages: their impersonal nature. The typical approach reflects the widespread assumption that old people would prefer human aides. The mental model of the elder is my late mother-in-law, who treated her caregivers as new best friends. Who wants to be cared for by an inanimate object rather than a person?
The same kind of person who, all else being equal, prefers the self-checkout to the cashier or the ATM to the bank teller. For all the chatter about technology and “privacy,” nobody seems to be thinking about what it’s like to have a strange person come into your house, hang around for hours, and see you in your most intimate and vulnerable moments. The privacy that matters in everyday life has nothing to do with databases.
My fiercely independent parents have no interest in a human caregiver, even though they could use some help. And I understand why. Personally, I’d prefer a robot that acts like a helpful appliance.
I'll be giving a Zoom lecture on The Fabric of Civilization Saturday. Sign up at Lunatic Fringe Yarns. I’m adding a live demo explaining how spinning works, since even experienced spinners often don’t understand what’s going on. Let’s hope it works.
The Prepared is an excellent newsletter about “engineering, manufacturing, and infrastructure.” Their book club recently featured The Fabric of Civilization, and I answered members’ questions on Zoom. Hilary Predko did a fantastic job editing the conversation into an interesting Q&A, complete with some nice visuals.
Two fun food writers: Tamar Haspel grows her own and Howard Miller (whose bio is a story in itself) plumbs the weirdness of American food history, although his latest post is on the movie Elvis.
On Substack, there are no editors telling you what you can write about. Or, as you may have noticed, when that free post has to be finished.
In my August 6 post, I announced a contest inspired by thoughts from fellow dynamist Substackers Jim Pethokoukis and Anton Howes:
So here are the challenges. You can pick one or try any combination.
I’ll publish a selection of the best here (you’ll retain rights, of course) as I receive them and will accept entries through September 30. I’ll then award the top two in each category a collection of what Jim would call “Up Wing” books. The judging process will depend on how many entries are received, and I reserve the right to award fewer than six prizes. Email them to me at vp@vpostrel.com.
Full background, including the model paragraph, at the original post.
My latest Bloomberg Opinion column looks at some unfortunate, but largely futile, NIMBYism in Silicon Valley’s (and America’s) most expensive town. Here’s the opening:
Venture capitalist Marc Andreessen got caught last week engaging in housing hypocrisy. The author of a 2020 manifesto called “A Time to Build,” Andreessen is a vocal opponent of NIMBYism. Yet when it came to his own town of Atherton, California, Andreessen signed a public comment opposing a plan to add 137 units of multifamily housing by rezoning nine lots. (The comment, written in the first-person singular and a style unlike Andreessen’s, seems to have been composed by his wife.)
The incident proves more than that. It demonstrates that California’s state-level housing reforms are working — not as fast as they ideally would, but working nonetheless.
To see what’s going on, read the full thing on Bloomberg Opinion. If you can’t get past the paywall, you can read a version without links at the WaPo, courtesy of my subscription.
Can Liberalism Make Peace Between the Future and Its Enemies?
Aaron Ross Powell, who hosts The Unpopulist’s podcast, interviewed me about my 1998 book The Future and Its Enemies, which he said “looks more and more prescient with every passing day.” Aaron asked excellent, thought-provoking questions and I was having an articulate day. It’s a wide-ranging discussion and I highly recommend listening to the podcast or reading the transcript. Here’s a selection:
Aaron: I was in high school in the 90s. Thinking about gay marriage—you mentioned gay marriage—how dramatic the change on acceptance of gay relationships and gay marriage has been: When I was in high school, Ellen coming out on her sitcom was, like, We're going to have a gay character on television! This was national news; everyone was talking about it. Whereas now, 30 years later, it's just like, so what, there's a gay character.
It happens very quickly, and this makes me think how much of this is about—and going back to the rules, too—ambiguity versus clarity; that people want to know how things are, and how they're going to be. And a lot of rapid change is not constant. It's not uniform. It is experimentation and competing views and figuring out which is the right one, or which is the acceptable one.
All of that messiness means that things are ambiguous, and that what we want is clarity. We want to know, okay, this is the rule that I'm going to have to follow tomorrow. This is what's going to be acceptable. I'm not going to get called out for this. I'm willing to change, but I want to know what it's going to be. That dynamism is inherently ambiguous.
Virginia: Well, I think that is part of it. I think people do want to be able to make their own plans and structure their own lives in a way that is going to work for them. I would argue that you're better off in a world where people aren't constantly making new rules, from their plans, to ruin your plans. That's one of the big Dynamist ideas.
But you were talking about people wanting clarity. One of the things that I've written about over the years is clothing sizes and problems of fit. Bear with me; this is relevant. People tend to think that it would be better if there were specific clothing sizes—that if you knew that every size eight dress was for a 35-inch bust and a 28-inch waist (I'm making these up) and 40-inch hips, or something like that, that would be great, because everything would be the same. You would know exactly what you were getting.
It would actually be terrible. In the ‘40s, the catalog companies actually went to the government and said, Could you please establish some standard sizes? And they did. But almost as soon as they were established, different brands started not complying with them, because it wasn't required; it wasn't a regulation.
The reason is that people's bodies come in different proportions—even two people who are the same height and weight. One will have longer legs, one will have shorter arms, one will have a bigger waist, the other will have bigger hips, et cetera. What happens is that brands develop their own fit models and their own sizes. The lack of clarity actually makes it more possible for people to find what fits. I think that is an analogy to one aspect of dynamism—that is, the fact that there isn't a single model that everyone must comply with makes it more likely that people can structure their own lives in meaningful ways.
Now that said, this goes back to this issue of nested rules. Hammering down on people because they express views that were perfectly normal 10 minutes ago, or worse yet, because they use a term in a nonpejorative way (they think), and suddenly, it's turned out that it's now pejorative: This is not good. This is a kind of treating as fundamental rules things that should be flexible and adjustable and tolerant. There is this idea of tolerance when we talk about tolerance as a liberal value, a liberal virtue, but there's also mechanical tolerances. I think a society needs that kind of tolerance as well. That allows for a certain amount of differentiation and pliability; that allows things to work, and it allows people not to be constantly punished. Zero tolerance is a bad idea. Anytime people are having zero tolerance, you're almost always going to be running into trouble.
Read or listen to the whole thing here. Buy The Future and Its Enemies on Amazon here.
A Substack Milestone and a Contest Reminder
I’ve been writing this newsletter for four months and have just crossed the 2,000 subscriber mark. Please spread the word.
In last week’s post, I announced a contest inspired by thoughts from fellow dynamist Substackers Jim Pethokoukis and Anton Howes:
So here are the challenges. You can pick one or try any combination.
I’ll publish a selection of the best here (you’ll retain rights, of course) as I receive them and will accept entries through September 30. I’ll then award the top two in each category a collection of what Jim would call “Up Wing” books. The judging process will depend on how many entries are received, and I reserve the right to award fewer than six prizes. Email them to me at vp@vpostrel.com.
Full background at the original post. I’ve been asked about word limits on the written entries. The inspiration paragraph is about 250 words long. I suspect 250-500 words is the sweet spot, but I don’t want to put limits on readers’ imagination. The only warning is that if you go over 1,000 words you probably won’t get the judges’ full attention unless the writing is riveting.
If you’d like to nominate or donate books as prizes, please email me.
For Bloomberg Opinion, I interviewed Nolan Gray, author of the new book Arbitrary Lines, which advocates abolishing zoning (but not land-use planning) in the U.S. Here are a couple of excerpts:
VP: Is zoning a specifically US phenomenon?
NG: Most developed countries have something resembling zoning. They will say industrial building is not allowed in certain quarters of the city, or certain portions of the metropolitan area are going to be reserved for agriculture. But US zoning is unique in at least two ways. The first is single-family zoning. No other zoning system in the developed world, to my knowledge, demarcates specific areas only for single-family housing.
The second way that US zoning is unique is the complete orientation around the car. It’s often illegal to build an apartment building without a parking garage, or it’s illegal to build a commercial strip without a large parking lot….
VP: You write about the origins of zoning in both New York and Berkeley, California. Can you explain what drove it?
NG: Both reflect the “Baptists and bootleggers” coalition that gets us zoning. The “Baptists and bootleggers” idea is that political coalitions will normally have someone who’s cynically invested in the policy — the bootlegger who supports prohibition because he can make money off of it — and then the Baptist who provides the political movement with moral cover.
Start with the “Baptists.” During the Progressive Era there was this notion that cities and markets are too scary and chaotic. Wouldn’t it be great if we got all the smartest people in the room to come up with a big master plan for what’s going to be allowed on every single lot in our city for the next 50 years? Most modern people look back and think that’s a little crazy. But that was the ethos.
The bootleggers were the landlords who — in the Manhattan context — think, “Way too much office supply is being built in lower Manhattan and it’s lowering the value of my assets.” In the Berkeley case, if you read the zoning promotional materials, one paragraph will say, “We need to adopt zoning so we can keep industry out of residential neighborhoods.” With modern eyes, you read that and think, Yeah, that makes sense. You don’t want an oil refinery next to your house. But then the next paragraph explains what industries they’re concerned about. It’s Chinese laundries. Or dance halls that are bringing African Americans into the neighborhood.
In New York City, shopkeepers on Fifth Avenue were worried about loft manufacturing moving closer to the shopping district. Again, you read that with modern eyes and think, OK, factories. There must have been smoke or noise or vibrations. But the shopkeepers’ specific concern was that poor Jewish factory girls are coming to window-shop along the corridor, and they’re scaring off our elite clientele. Zoning is much more of a social project than it is a good-government process.
VP: You repeatedly make the point that zoning “cannot build a building. It can only ever stop something from being built.” Why is that an important distinction?
NG: When Minneapolis abolished single-family zoning recently, some of the media coverage said that it was banning new single-family homes. But that’s not what they did. They got rid of single-family zoning, which was just a prohibition on apartments. They were getting rid of a prohibition.
In L.A., there are a lot of conversations about getting rid of minimum parking requirements. And people say, “Come on, you’ve got to have somewhere to park.” But getting rid of minimum parking requirements isn’t saying to developers that you’re not allowed to build any more parking. It’s saying that we’re not going to force you to build any parking. We’re not going to mandate things that you wouldn’t otherwise have done. It’s a really important difference.
You can read an ungated version here, courtesy of my WaPo subscription. Our conversation was much longer than what I was able to publish, and, of course, the book goes into further depth. The discussion of Houston, the great American unzoned city, is particularly interesting.
Speaking of interviews, I highly recommend this conversation between Persuasion founder Yascha Mounk and Eboo Patel, the founder of Interfaith America and the author of We Need to Build: Field Notes For Diverse Democracy. It’s excellent throughout. Patel has a particular appreciation of the success of America’s dynamist approach to religion and how it plays out in the constant evolution of civic associations. I also appreciated the early discussion of the appeal and limitations of the “critique” approach Patel embraced as a college student. “I thought sophistication meant only telling the most negative story possible,” could be the slogan not only of the academic left but of many libertarians and conservatives.
I’ve spent the week interviewing people at synthetic biology startups. I’ve eaten salmon sushi grown from a few cells, with no fish killed and no impurities (parasites, heavy metals, microplastics, whatever). I’ve eaten cream cheese made from whey protein excreted by fungi. You can read more later this year, in a longer article elaborating on the themes in this column from last year. As Greg Benford argued in this 1995 Reason article, ours is the Biological Century: “Beyond 2000, the principal social, moral, and economic issues will probably spring from biology's metaphors and approach, and from its cornucopia of technology. Bio-thinking will inform our world and shape our vision of ourselves.”
The biological advances proceed not just from greater biological understanding, however, but also from advances in computing power and now increasingly in machine learning. Last week brought the news that protein folding is no longer a mystery. The AI company DeepMind, owned by Alphabet (Google’s parent company), announced:
In partnership with EMBL’s European Bioinformatics Institute (EMBL-EBI), we’re now releasing predicted structures for nearly all catalogued proteins known to science, which will expand the AlphaFold DB by over 200x - from nearly 1 million structures to over 200 million structures - with the potential to dramatically increase our understanding of biology.
What will come of this information remains to be seen, but it promises to be big, with implications for medicine, agriculture, and more. My Bloomberg Opinion colleague Lisa Jarvis, a former skeptic, wrote (ungated version here):
Since the early 1990s, scientists have been trying to train computers to predict a protein’s structure based on its genetic sequence. AlphaFold had the first taste of success in 2020, when it correctly predicted the structures of a handful of proteins. The next year, DeepMind put on its server about 365,000 proteins.
Now, it’s put the entire universe of proteins up for grabs — in animals, plants, bacteria, fungi and other living things. All 200 million of them.
Much as the gene-editing tool Crispr revolutionized the study of human disease and the design of drugs to target genetic errors, AlphaFold’s feat is fundamentally changing the way new medicines can be invented.
“Anybody who could have thought that machine learning was not yet relevant for drug hunting surely must feel different,” said Jay Bradner, president of the Novartis Institutes for BioMedical Research, the pharma company’s research arm. “I'm on it more than Spotify.”
Count me as one of the former skeptics. I hadn’t discounted the possibility that AI might have an impact on the drug industry, but I was weary of the many biotech firms hyping often ill-defined machine-learning capabilities. Companies often claimed that they could use AI to invent a new drug without acknowledging that the starting point — a protein structure — still needed to be worked out by a human. And so far, people have had to first invent drugs for the computer to improve upon them.
Producing the full compendium of proteins is something entirely different — and outside the usual hype cycle. It’s little wonder that executives at biotech and pharma companies are widely adopting AlphaFold’s revelations.
For more background on AlphaFold’s approach and the potential, see this Technology Review article from February.
On a more disturbing note, this AI-written letter to Glenn Loury fooled me completely. And I dread having to be on the lookout for AI-written student papers. (If you don’t want to learn, please don’t take my course!)
I am a huge fan of Jim Pethokoukis and his Substack newsletter, “Faster, Please!” But I’ve spent too much time thinking about glamour to share his enthusiasm for 20th-century visual depictions of the glorious future. They leave out too much—glamour always does!—and those omissions have had some perverse consequences, particularly in urban planning.1 I don’t want to live in the world of The Jetsons for the same reasons I don’t want to live in 1965. Plus there’s more to progress than faster transportation and robot maids. Surely our images can do better, including more human-scale views rather than grand visions that abstract away individual experience.
Meanwhile over at another Substack newsletter I enjoy, Anton Howes writes about Victorian confidence, quoting an 1859 document arguing for a successor to the Great Exhibition of 1851 (known for the Crystal Palace). It describes the previous eight years:
Looking back for that period in England, we find that several new arts and industries have arisen, and old ones have been extended. Scarcely more than ten years have passed since the submarine telegraphs were unknown; the screw propeller applied to our steam-vessels; the glass-duty removed; the great improvements and advancement in the trade and products of the Staffordshire potteries effected; the manufacture of bricks left free to take such form as may be required; the excise duty on soap got rid of; photography and chromatic printing introduced and perfected as arts; gutta percha and many vegetable oils from our Colonies, such as the Bassia Latifolia and the Cahoun Palm, introduced as new raw materials in commerce; whilst the declared value of our exported manufactures has risen from £65,756,000 in 1851 to £122,155,000 in 1857. Add to the above the fact, that within ten years the resources of our Colonies have been largely developed, and the commercial world has acquired three additional emporia: two on the shores of the Pacific, and one on the great American Lakes, viz., San Francisco, Melbourne, and Chicago, none of which are even named in the edition of Mr M’Culloch’s Dictionary of Geography, published in 1849; also that China and Japan have now been opened to trade with England; and we cannot but come to the conclusion that ten years is a period fully sufficient to justify the Society of Arts in proposing to hold an Exhibition in 1861.
Anton comments: “The contrast to today is marked. It is striking that so many intellectuals — particularly in the UK, but also in the US and elsewhere — believe economic and technological stagnation to now be an unavoidable fact of life. Although I don’t subscribe to the view that we’ve been seeing stagnation, I do think we’re falling far short of our potential. It’s worth imagining what kind of Victorian-style paragraph we can write about our last eight years, and what we would hope to write about the next.”
So here are the challenges. You can pick one or try any combination.
I’ll publish a selection of the best here (you’ll retain rights, of course) as I receive them and will accept entries through September 30. I’ll then award the top two in each category a collection of what Jim would call “Up Wing” books.2 The judging process will depend on how many entries are received, and I reserve the right to award fewer than six prizes. Email them to me at vp@vpostrel.com.
My latest Bloomberg Opinion column is explained well in an excellent subhead (contrary to popular assumptions, writers don’t craft the headlines or subheads that appear on their work): “Packaging less stuff for the same price doesn’t fool consumers or economists. But diminishing quality imposes equally maddening extra costs that are almost impossible to measure.” Excerpt:
If a 16-ounce box contracts to 14 ounces and the price stays the same, I asked Bureau of Labor Statistics economist Jonathan Church, how is that recorded? “Price increase,” he said quickly. You just divide the price by 14 instead of 16 and get the price per ounce. Correcting for shrinkflation is straightforward.
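The unit-price arithmetic is simple enough to sketch. This is just an illustration, not BLS methodology code; the $4.00 sticker price is a hypothetical example.

```python
# Shrinkflation registers as a price increase once you compute price per ounce.
# The $4.00 price is a made-up example; the 16-to-14-ounce shrink is from the text.

def price_per_ounce(price: float, ounces: float) -> float:
    """Unit price used to compare packages of different sizes."""
    return price / ounces

before = price_per_ounce(4.00, 16)  # $0.25 per ounce
after = price_per_ounce(4.00, 14)   # about $0.286 per ounce

# The effective price rose roughly 14 percent even though the sticker price
# never changed.
increase = (after - before) / before
print(f"{increase:.1%}")  # prints "14.3%"
```

The same sticker price spread over fewer ounces is, as Church says, simply a price increase.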
New service charges for things that used to be included in the price, from rice at a Thai restaurant to delivery of topsoil, also rarely sneak past the inflation tallies any more than they fool consumers.
But a stealthier shrinkflation is plaguing today’s economy: declines in quality rather than quantity. Often intangible, the lost value is difficult to capture in price indexes.
Faced with labor shortages, for example, many hotels have eliminated daily housekeeping. For the same room price, guests get less service. It’s not conceptually different from shrinking a bag of potato chips. But would the consumer price index pick up the change?
Probably not, Church said.
This phenomenon, which Doug Johnson aptly dubbed “disqualiflation” in a Facebook comment, is widespread. One example is the four-hour airport security line I chronicled in an earlier Substack post. Another is the barely trained newbie who screws up your sandwich order—a far more common experience today than four years ago. It’s the flip side of a phenomenon I wrote about in The Substance of Style and in economics columns in the early 2000s (see here and here).
During the 2000s and 2010s, inflation was probably overstated because of unmeasured quality increases. Now there’s the opposite phenomenon. Quality reductions have become so pervasive that even today’s scary inflation numbers are almost certainly understated.
If you can read the column at Bloomberg, please do. But if you run into the paywall, which allows a few articles a month, you can use this link to the WaPo version, which doesn’t have links.
You may have heard that the “chemical imbalance” theory of depression has been disproved. A typical summary is this one, from a post by a Facebook friend who shall remain nameless:
The pseudoscientific idea that “depression” is a “chemical imbalance in the brain” has been among the most pernicious for the happiness of humans, but among the most convenient for big pharma. “You don't need to rethink your life. Just take this pill.” The same logic behind drug addiction.
Here’s a popularization by the authors of the scientific paper. The study is not about whether the general idea of a chemical imbalance is correct. Nor is it about whether antidepressants work. It is specifically about the theory that “depression is a result of abnormally low or inactive serotonin.” Saying depression isn’t caused by abnormally low serotonin is a long way from saying it’s just the world telling you to rethink your life.
I do not need to rethink my life. I have a wonderful husband, meaningful work, financial security, generally good health. I had a loving family and a happy childhood. But from adolescence onward, I have suffered from bouts of depression. “But I can’t be depressed!” I long thought. I had a happy childhood!
But when this Zoloft commercial originally ran in 2001, I completely identified with the little blob—only I felt far more miserable and worthless. And I had already rethought my life. I had quit my job as editor of Reason, driven at least in part by a desire to stop feeling like a failure, and embarked on a career as an independent writer. My career was going well, but my mood was as black as ever.
I did eventually talk with my internist, who diagnosed depression. When she asked whether I ever felt suicidal, I said no, never, but I understand why other people do. She prescribed fluoxetine, aka Prozac, because it was available as a cheap generic. It made me less passionate and gave me weird dreams but allowed me to get a rational grip on my depressive cycles. After a number of years, I went off the drug. When my depression returned a few years ago, thankfully not in as intense a form, my doctor prescribed sertraline (aka Zoloft), which is much, much better. It simply makes me feel normal, without the numbing effects of fluoxetine.
Depression feels as foreign and irresistible as the flu. If you think it is mere sadness, you don’t know what you’re talking about. We may not understand why antidepressive medication works, which makes it like many other medications, but I have to wonder at the urge to tell people who suffer from this crippling disease that they should just get their acts together.
Of course, I’m just a single data point. If you want to read some expert reactions, here’s a collection of short responses to the new findings. A couple of examples, from the same institution, University College London, as the review’s co-authors:
Dr Michael Bloomfield, Consultant Psychiatrist and UKRI Principal Clinical Research Fellow, Translational Psychiatry Research Group Head, UCL, said:
“The hypothesis that depression was caused by a chemical imbalance in serotonin was a really important step forward in the middle of the 20th century. Since then, there is a huge amount of research which tells us that the brain’s serotonin system plays very important roles in how our brains process different emotions.
“The findings from this umbrella review are really unsurprising. Depression has lots of different symptoms and I don’t think I’ve met any serious scientists or psychiatrists who think that all causes of depression are caused by a simple chemical imbalance in serotonin. What remains possible is that for some people with certain types of depression, that changes in the serotonin system may be contributing to their symptoms. The problem with this review is that it isn’t able to answer that question because it has lumped together depression as if it is a single disorder, which from a biological perspective does not make any sense.
“Many of us know that taking paracetamol [acetaminophen] can be helpful for headaches and I don’t think anyone believes that headaches are caused by not enough paracetamol in the brain. The same logic applies to depression and medicines used to treat depression. There is consistent evidence that antidepressant medicines can be helpful in the treatment of depression and can be life-saving. Antidepressant medicines are one type of treatment alongside other types of treatment like psychotherapy (talking therapy). Patients must have access to evidence-based treatments for depression and anyone taking any treatment for depression who is contemplating stopping treatment should discuss this with their doctor first.”
Prof David Curtis, Honorary Professor, UCL Genetics Institute, said:
“This paper does not present any new findings but just reports results which have been published elsewhere and it is certainly not news that depression is not caused by “low serotonin levels”. The notion of depression being due to a “chemical imbalance” is outmoded, and the Royal College of Psychiatrists wrote that this was an over-simplification in a position statement published in 2019. Nor is it the case that SSRI antidepressants increase serotonin levels. Their immediate action is to alter the balance between serotonin concentrations inside and outside neurons but their antidepressant effect is likely due to more complex changes in neuronal functioning which occur later as a consequence of this. It is very clear that people suffering from depressive illness do have some abnormality of brain function, even if we do not yet know what this is, and that antidepressants are effective treatments for severe depression whereas interventions such as exercise and mindfulness are not. It is important that people with severe depression are not discouraged from receiving appropriate treatments, which can make a huge difference to them and those around them.”
And here’s Freddie deBoer, who knows serious mental illness all too well.
Show, don’t tell: One of the small, pervasive changes that makes news stories seem both patronizing and politicized is the increasingly common practice of inserting judgmental adjectives into otherwise descriptive sentences. Telling readers that a statement is “false” while repeating it may be justified, if intrusive, but in other cases it’s an unnecessary tic.
Gone is the assumption that readers are intelligent people who can draw their own conclusions from a compelling presentation of the facts. Journalists now seem to live in fear that their readers won’t think correctly. Take this sentence from an interesting article on the evolution of American Sign Language: “For a portion of the 20th century, many schools for the deaf were more inclined to try to teach their students spoken English, rather than ASL, based on harmful beliefs that signing was inferior to spoken language.” (Emphasis added.)
If you read the article, you are highly unlikely to come to the conclusion that signing is anything less than a full-blown language, not inferior to spoken English. But the article never gives evidence that this incorrect 20th-century belief was harmful. It doesn’t discuss the pluses and minuses of signing, or why one belief was succeeded by another. That’s a different story. In the context of this story, the adjective is unnecessary, distracting, and insulting to the reader’s intelligence.
In a word, chintz: This article from House and Garden (UK) examines “the debt British interior design owes India” and quotes The Fabric of Civilization, which the magazine reviewed earlier this year:
Postrel’s The Fabric of Civilization is a relatively academic analysis made accessible to casual readers. It’s full of amazing anecdotes, too: you will learn, for example, that a 100sqm sail for a Viking ship would take 60 miles of yarn to weave, and took longer to make than the ship itself. Postrel also visits modern textile-production facilities and weaving schools, to understand the technology behind the huge uptick in global availability of fabric.
And a favor to ask of my British readers: Please review The Fabric of Civilization on Amazon UK, so that the folks mad that the illustrations aren’t in color don’t predominate!
Just for fun: