Smile when you say that

A post in which I talk about a podcast, but also about how you can hear a smile and why that scares me.

Thanks to the If Books Could Kill podcast and its co-host, Peter Shamshiri, I started listening to 5-4 — “a podcast about how much the Supreme Court sucks” which Peter also co-hosts. It’s good, infuriating, and informative. But handing you a new podcast to listen to is not why I’ve asked you here.

There is a promotion at the half-way point for a newsletter — Balls and Strikes — that another co-host, Michael Morbius, narrates. They seem to run it each episode and I’ve started noticing something. Actually, I’ve noticed that I’ve noticed something. It’s a bit meta.

I’m getting there.

At one point as he speaks, I can tell he's started smiling. The change is clear but indescribable. I don't know why my brain has picked up on this, and I know even less about why he's smiling. So I went to the Internet, as I do, to find out why my brain does what it does.

My first stop was this article in Discover Magazine that showcases a study suggesting that if you can sense a smile in a voice you're hearing (from someone you can't see), you tend to smile back. The article and the study it links to (well done, consumer science journalism) discuss the lack of research on what constitutes this auditory smile. Checking the paper's sources, I ended up here: “The vocal communication of different types of smile” in Speech Communication. The study is from 2008, and I'm not sure whether I'll ever find what has built upon this research since. But I was still curious to see who else out there was wondering, “did I just hear you smile?”

Then I got here:

“Smiling voices maintain [increased trust] even in the face of behavioral evidence of untrustworthiness.” (1)

…and here:

“We present an experiment in which participants played a trust game with a virtual agent that expressed emotion through its voice, in a manner congruent or incongruent with its behavior.” (1)

…and here:

“Using an investment game paradigm, we found that positive vocal emotional expression – smiling voice – increases participants’ implicit trust attributions to virtual agents, compared with when agents speak with an emotionally neutral voice. As previously observed, the monetary returns of the agent also affected implicit trust, so that participants invested more money in the agent that was behaving generously.” (1)

And this is the point where I saved the citation in Paperpile, sat back with my arms folded, and leaned over to look down into the murky depths of this rabbit hole. I still don't know what stimuli my brain is picking up that translate into “smile” after Michael says “Supreme Court sucks”, but I can pick up on the danger of being able to simulate this in a way that creates trust between you and a stranger on the phone.

This is more than just Cash Green's white voice in Sorry to Bother You; this is the “right voice,” the one that flicks an unknown switch in your head so that you picture a reassuring smile. The “right voice” is built upon research that pulls the secrets out of our brains and tools them for algorithmic benefit. The “right voice” won't just relieve people of their hard-earned money; it will lead them astray, down paths not yet cut.

What do I do? This digression has made me thoughtful. Sigh.


(1) Torre, Ilaria, et al. “If Your Device Could Smile: People Trust Happy-Sounding Artificial Agents More.” Computers in Human Behavior, vol. 105, Apr. 2020, p. 106215. https://doi.org/10.1016/j.chb.2019.106215

What about an NFT of a tulip?

“It’s a Ponzi scheme. When there was tulip mania, at least when you lost all your money, you still had a tulip.”

Dennis Kelleher

I watch cryptocurrency drama from the nosebleed seats. I have some shallow understanding of the system and, I’m not ashamed to say, I rely on my students to fill in some details for me if I’m curious and they’re willing. If you keep hearing about FTX and wondering what’s going on, this piece in The Atlantic by Annie Lowrey will give you an idea of the most recent meltdown.

Everything in content moderation…

Content moderation is what Twitter makes — it is the thing that defines the user experience. It’s what YouTube makes, it’s what Instagram makes, it’s what TikTok makes. They all try to incentivize good stuff, disincentivize bad stuff, and delete the really bad stuff.

From Nilay Patel’s “Welcome to hell, Elon” at The Verge

Social Media Exile

I don’t think social media is really all that healthy.

I've tried to stay positive in my own rhetoric about it, especially in class. I've tried to discuss its functions conceptually: there are benefits if you curate well and stay hypervigilant, but the labor costs outweigh the benefits.

I have been mindful of how I feel when I'm on a platform. Twitter is now where I feel the worst; Facebook is pretty neutral as I've culled my friends list down considerably. Instagram and TikTok are still relatively positive, or at least not actively negative. But I want to re-evaluate how *I* want to use platforms, what *I* want to say.

Right now, dunno, ya know?

This is going to be tricky as I read in Digital Composition and Rhetoric. There are more media out there than just social media channels, but like the sewage systems of most metropolitan areas, everything runs into them. Those channels are drivers of discourse now. We build cities based on how fast our shit flows underneath.

(Perhaps that’s not the best metaphor – but you see what I mean.)

I will be thinking a lot about how we pull back, as a society. How we maintain important connections, but don't add to or get inundated by the garbage. Maybe it's time to just go all in on a personal website. Keep all of my postings there and worry less about enGagEmeNT and more about cultivating a space for me.