Tag: technology

  • Link: For The Love of God, Make Your Own Website

    Gita Jackson | Aftermath

    “Unfortunately, this is what all of the internet is right now: social media, owned by large corporations that make changes to them to limit or suppress your speech, in order to make themselves more attractive to advertisers or just pursue their owners’ ends.”

  • I made the mistake of adding tech feeds to my RSS …

There was a time, back in the 00s or so, when I kept up to date on a lot of tech news. I spent time following trends in web development, mobile technology, and apps. It was important to my work, but it was also part of my overall interest in how the Internet was being integrated into our daily lives.

Once I moved away from tech-focused work and into education, I let those news feeds go. I stopped tracking the latest developments and relied on larger, mainstream news outlets to inform me about the technology that would impact my daily life. Yet, as I started teaching first-year composition, I realized (we all realized) that being up to date on the technology our students were using (not always by choice, either) would make our discussions surrounding critical thinking that much more relevant. Trust me, when ChatGPT burst onto the scene, I suggested we lean into it, teaching it as a new tool that students could use in their ever-growing arsenal of technological apps that can help them… wait. Help them what, exactly? What was I supposed to be teaching them? More on that in a moment.

As I’ve reinstated my tech RSS feeds, I find myself getting more and more suspicious about the proliferation of AI. While I’ve not been totally receptive to the computerization of our automobiles, nor to the subscription model of everything, I still want to be able to glean where personal technology is headed, and, if Microsoft’s recent announcements are any indication, the future is AI.

If you follow my links of interest at all, you can see where this is going.

Who wants this? Why are we being told that we want this? Every time I read about another AI-enhanced laptop, or an app that I’ve been using suddenly wants me to incorporate its AI model into my workflow, I get frustrated.

    If I’m not building up the critical skills to summarize an article (and most public-facing articles are woefully short and simplistic already), what am I doing with the time AI is saving?

    What’s the point of learning discernment and curation if I never skip over Google’s AI-created search summaries at the top of its results page? (Shouldn’t have made Google suck, yeah?)

How am I supposed to teach critical thinking skills if all the little ways in which we think every day are being eaten up by large language models that have dubious biases built in and don’t have an understanding of a ham sandwich beyond the fact that it’s sometimes mentioned next to mustard?

More and more I feel like we’re heading in a direction where we will teach two ways of existing cognitively in our technological future: 1) a way to prompt AI to yield results that save us from doing labor or gaining mastery; and 2) a way to avoid AI altogether and create bare-bones text and art in a way that hides it from the upcoming singular cloud. Both feel exhausting.

    Meanwhile, a bunch of narcissistic shitheads are at the controls of the future.

  • Don’t talk about tech and NOT credit authors

    This short section of The Atlantic Daily has the following sentence:

    “The term metaverse was coined in a 1992 science-fiction novel titled Snow Crash. (The book also helped popularize the term avatar, to refer to digital selves.)”

The book, not Neal Stephenson, the person who happened to author the book, but the book. If we want to talk about the future and technology but don’t address the erasure of the humans behind the technology (and the literary inspirations for its ideas), then we’re part of the problem.

  • Smile when you say that

Thanks to the If Books Could Kill podcast and its co-host, Peter Shamshiri, I started listening to 5-4, “a podcast about how much the Supreme Court sucks,” which Peter also co-hosts. It’s good, infuriating, and informative. But handing you a new podcast to listen to is not why I’ve asked you here.

There is a promotion at the halfway point for a newsletter, Balls and Strikes, that another co-host, Michael Morbius, narrates. They seem to run it each episode, and I’ve started noticing something. Actually, I’ve noticed that I’ve noticed something. It’s a bit meta.

    I’m getting there.

At one point, as he speaks, I can tell that he starts smiling. The change is clear but indescribable. I don’t know why my brain has picked up on this. Still less do I know why he’s smiling. So I went to the Internet, as I do, to find out why my brain does what it does.

My first stop was this article in Discover Magazine that showcases a study suggesting that if you can sense a smile in a voice you’re hearing, rather than see it on a face, you tend to smile back. The article and the study it links to (well done, consumer science journalism) discuss the lack of research on what constitutes this auditory smile. Checking the paper’s sources, I ended up here: “The vocal communication of different types of smile” in Speech Communication. The study is from 2008, and I’m not sure whether I’ll find what builds upon this research. But I was still curious to see who else out there was wondering, “did I just hear you smile?”

    Then I got here:

    “Smiling voices maintain [increased trust] even in the face of behavioral evidence of untrustworthiness.” (1)

    …and here:

    “We present an experiment in which participants played a trust game with a virtual agent that expressed emotion through its voice, in a manner congruent or incongruent with its behavior.” (1)

    …and here:

“Using an investment game paradigm, we found that positive vocal emotional expression – smiling voice – increases participants’ implicit trust attributions to virtual agents, compared with when agents speak with an emotionally neutral voice. As previously observed, the monetary returns of the agent also affected implicit trust, so that participants invested more money in the agent that was behaving generously.” (1)

And this is the point where I saved the citation in Paperpile, sat back with my arms folded, and leaned over to look down into the murky depths of this rabbit hole. I still don’t know what stimuli my brain is picking up that translate into “smile” after Michael says “Supreme Court sucks,” but I can pick up on the danger of being able to simulate this in a way that creates trust between yourself and a stranger on the phone.

This is more than just Cash Green’s white voice in Sorry to Bother You; this is the “right voice,” the one that flicks an unknown switch in your head so that you picture a reassuring smile. The “right voice” is built upon research that pulls the secrets out of our brains and tools them for algorithmic benefit. The “right voice” won’t just relieve people of their hard-earned money; it will lead them astray, down paths not yet cut.

    What do I do? This digression has made me thoughtful. Sigh.


    (1) Torre, Ilaria, et al. “If Your Device Could Smile: People Trust Happy-Sounding Artificial Agents More.” Computers in Human Behavior, vol. 105, Apr. 2020, p. 106215. https://doi.org/10.1016/j.chb.2019.106215

  • What about an NFT of a tulip?

    “It’s a Ponzi scheme. When there was tulip mania, at least when you lost all your money, you still had a tulip.”

    Dennis Kelleher

    I watch cryptocurrency drama from the nosebleed seats. I have some shallow understanding of the system and, I’m not ashamed to say, I rely on my students to fill in some details for me if I’m curious and they’re willing. If you keep hearing about FTX and wondering what’s going on, this piece in The Atlantic by Annie Lowrey will give you an idea of the most recent meltdown.

  • Everything in content moderation…

“Content moderation is what Twitter makes — it is the thing that defines the user experience. It’s what YouTube makes, it’s what Instagram makes, it’s what TikTok makes. They all try to incentivize good stuff, disincentivize bad stuff, and delete the really bad stuff.”

    From Nilay Patel’s “Welcome to hell, Elon” at The Verge

  • The Chaos Machine, by Max Fisher

[Image: cover of The Chaos Machine by Max Fisher]
    This seems to be the most appropriate book to be reading right now. It, like any technology book, is dated already, but it’s a worthwhile dive into the media that drives our thinking.

  • Social Media Exile

    I don’t think social media is really all that healthy.

I’ve kept my own rhetoric about it positive, especially in class. I’ve tried to discuss its functions conceptually: there are benefits if you curate well and stay hypervigilant, but the labor costs outweigh the benefits.

I have been mindful of how I feel when I’m on a platform. Twitter is now where I feel the worst; Facebook is pretty neutral, as I’ve culled my friends list down considerably. Instagram and TikTok are still relatively positive, or at least not actively negative. But I want to re-evaluate how *I* want to use these platforms, what *I* want to say.

    Right now, dunno, ya know?

This is going to be tricky as I read in Digital Composition and Rhetoric. There are more media out there than just social media channels, but, like the sewage systems of most metropolitan areas, everything runs into them. Those channels are drivers of discourse now. We build cities based on how fast our shit flows underneath.

(Perhaps that’s not the best metaphor, but you see what I mean.)

I will be thinking a lot about how we pull back, as a society. How we maintain important connections but don’t add to, or get inundated by, the garbage. Maybe it’s time to just go all in on a personal website. Keep all of my postings there and worry less about enGagEmeNT and more about cultivating a space for me.