10 Comments
John F Lang

I am adamantly against using AI for any purpose. It is rapidly inveigling itself into our lives, replacing our abilities and initiative at every stage. I suppose it just takes too much fortitude for most people to reject it because they fear being left behind. One can only hope that once its malevolence is apparent, we will have the good sense to cast it off. However, as we become more and more dependent on it, that will become increasingly difficult. Hold onto your seats.

SlowlyReading

I've been heavily exploring it too, for historical research, but I find that all of the LLMs still regularly hallucinate quotations when I ask them to explore primary sources and look for relevant excerpts. They do this even when I write instructions like "Please double-check all quotations. Do not make up any quotations." It's bad enough that the majority of their so-called quotations from primary sources are syntheses of what the author actually said: vaguely authentic, but partly invented. They do, however, point one in the right direction (i.e. to a particular source), but they seem able to do basically nothing to locate the relevant excerpts within that source.

Alastair

As someone who works in a specialty engineering domain (data), I have been actively using LLMs since GPT-3.5, and the pace of change has been absolutely astonishing.

You have voice transcription via LLMs - apps like Superwhisper, which I'm using to dictate this note - that are so much better than the old-fashioned equivalents like Dragon Dictate, for a fraction of the price. And incredible OCR that means you can now process huge books for pennies, with much higher accuracy than older software like Tesseract.

With the coding agents, I have built all sorts of tooling and personal projects that I never would have had the time to do otherwise - things that might once have been a few weeks of work and research now literally take a few hours, as you mentioned in your article, Aaron.

I will say - and I do think this will be a permanent aspect - that there will always be a degree of management required for these tools. Just like Excel was __supposed__ to eradicate the accountant by making accountancy work accessible to anybody who could use a computer. So I think LLMs will make software substantially easier to produce, but you still need to understand what makes good software if you're actually going to deploy it beyond just your own local instance.

The future we were promised in the 50s might still come to pass.

Matthew Stanley

I've been using AI to translate public domain German texts from the late 1800s. Mistral's OCR rips the text out of a PDF, then I use a Python script (written by Claude) to call the DeepL translation API, which gives me a markdown file with the translated book. I'm currently preparing one for publication on Amazon, as it has never had an English translation before.
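For readers curious what such a script looks like, here is a minimal sketch of the translation step, assuming the official `deepl` Python package and an API key of your own. The chunk-size limit and paragraph-splitting logic are my own illustrative choices, not the commenter's actual script:

```python
# Sketch: translate OCR'd German markdown via the DeepL API.
# Assumes `pip install deepl` and a DeepL auth key; MAX_CHARS is illustrative.

MAX_CHARS = 100_000  # hypothetical per-request size budget

def chunk_paragraphs(text: str, max_chars: int = MAX_CHARS) -> list[str]:
    """Group paragraphs into chunks that stay under the size budget,
    so each API call carries many paragraphs but never splits one."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks

def translate_book(text: str, auth_key: str) -> str:
    """Send each chunk to DeepL and stitch the results back together."""
    import deepl  # deferred so the chunking helper works without the package
    translator = deepl.Translator(auth_key)
    translated = []
    for chunk in chunk_paragraphs(text):
        result = translator.translate_text(
            chunk,
            source_lang="DE",
            target_lang="EN-US",
            preserve_formatting=True,  # keep markdown structure intact
        )
        translated.append(result.text)
    return "\n\n".join(translated)
```

Writing the output back to a `.md` file then gives the translated book in one pass.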

Brian Marr

Eastern Orthodoxy doesn't have a defined doctrine of atonement. Girard has more in common with modern liberal theology than anything authentically Eastern.

Christopher Johnson

Aaron,

On both this and the podcast, you've used the phrase "left behind," as in those who do not jump on AI now are at risk of being "left behind."

I've been pondering that phrase, and can't quite figure out to whom that advice is being given. Is this primarily for workers in the information economy (like yourself), or something more general?

By analogy, I'm thinking of other technologies - the automobile, the telephone, the internet. In some sense, the business owners who did not embrace the new technologies were "left behind." But in another sense, for the average Joe, the new technology will eventually find him when it is mature, and early adoption probably makes little difference.

Or am I thinking about this wrong?

Aaron M. Renn

For some people, that's surely the case. For others, they could easily get disrupted out of their job before they know what hit them.

Christopher Johnson

I'm a physician in a midsized specialty group.

When I think about my work, I have some hope that AI may be able to make documentation more efficient. We have some partners trialing it. It works well, but the cost is still pretty high to implement across the board. Seems like it's going to be the way to go, though.

I'm not sure who in our company is going to get left behind by AI. It's possible that some of the billing department could be automated. Our scheduler could use better tools given the complexity of the task. I could imagine that we might have better medical records review and chart preparation, though this wouldn't take any jobs, just allow people to do them better. I don't think we'll replace the friendly front desk ladies with kiosks.

I guess what I'm trying to figure out, when I hear the constant drumbeat that I need to get on the AI train ASAP, is whether I'm right to be a little skeptical that it applies to me, or whether I'm just lacking the imagination and vision to see where it's going in my field.

Jared Penner

I’m a military physician looking to move to civilian practice in a few years… good to hear a report of use cases.

Spouting Thomas

Enjoyed hearing about these uses. I'm also trying to figure out how to max out my Claude Max subscription. Also enjoyed that discussion from your former colleague.

When it comes to things like podcast transcripts, one trick I started employing is giving a required word count (relative to the original) when asking for a summary.

50% of the original's word count, with an added instruction not to leave out a single point, is usually pretty good these days. From what I can tell, this really just leaves out the filler. Most of the time, I prefer reading these over reading the original transcript.

25% word count amounts to a detailed summary, but some points might be missed.
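The trick above is easy to automate when summarizing transcripts programmatically. Here is a minimal sketch that computes the target word count from the transcript itself and builds the prompt; the exact wording and ratios are illustrative, not the commenter's verbatim prompt:

```python
# Sketch: build a summary prompt with a required word count,
# per the commenter's trick (prompt wording is illustrative).

def summary_prompt(transcript: str, ratio: float = 0.5) -> str:
    """Ask for a summary at a fixed fraction of the original length.

    ratio=0.5 tends to strip only filler; ratio=0.25 gives a
    detailed summary that may drop some points.
    """
    target = int(len(transcript.split()) * ratio)
    return (
        f"Summarize the following transcript in roughly {target} words "
        f"(about {int(ratio * 100)}% of the original word count). "
        "Do not leave out a single point; cut only filler.\n\n"
        + transcript
    )
```

The resulting string can then be sent to whatever LLM API you use; the word-count constraint is what keeps the model from over-compressing.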