
I already have some issues with my public tone sounding… too official. Using the em-dash just makes it seem like I might be a bot. I’m not going to bother with that.

My response was a joke. You don’t have to clarify anything. You’re just taking it too seriously. It’s cool man. I’m not mad or anything.

The barista and the barmaid don’t love you man. They don’t love you. I don’t care if you flirt and they smile. They are doing a job. It’s a transaction. Don’t get in your feelings and do something you’ll regret just because she makes a nice latte.
I do want a Batrick plushie though…

In practice, the justice system is actually reactive, not preventive. Either the commission of a crime or the suspicion that one is possible prompts laws prohibiting it, marking it as criminal; then law enforcement and the justice system as a whole investigate instances where that crime is suspected, and litigation ensues.
Prevention may be the intent, but in actuality we know this doesn’t prevent crime. Anywhere outside the jurisdiction of a justice system that puts such “safeguards” in place, people will abuse that lack of jurisdiction. And people inside it with enough money or status (or both) will continue to abuse it for their personal gain. Which is pretty much what’s happening now, except they’ve realized they can try to preempt litigation against them by buying the litigants or part of the regulatory/judicial system.

Word roots say they have a point, though. Artifice, artificial, etc. I think the main problem with the way both of the people above you are using this terminology is that they’re focusing on the wrong word and conflating it with something it’s not.
LLMs are artificial. They are a man-made thing intended to fool people into believing they are something they aren’t. What we’re meant to be convinced they are is sapiently intelligent.
Mimicry is not sapience, and that’s where the argument for LLMs being real, honest-to-God AI falls apart.
Sapience is missing from generative LLMs. They don’t actually think. They don’t actually have motivation. When we anthropomorphize them, we are fooling ourselves into thinking they are a man-made reproduction of us without the meat-flavored skin suit. That’s not what’s happening. But some of us are convinced that it is, or that it’s near enough that it doesn’t matter.

This has “people don’t understand that you don’t fall in love in the strip club” vibes. Like. The stripper does not love you. It’s a transactional exchange. When you lose sight of that and start anthropomorphizing LLMs (or romanticizing a striptease), you are falling into a trap that lets the chinks in your psychological armor line up in just the right way for you to act on compulsions or ideas you normally wouldn’t.

I like the em dash and am very upset that AI has stolen it.
I guess you could say it’s ironic that some LGBTQ people are artists and creators and yet a magazine purporting to support and represent them used AI instead of drawing on the community for cover art.
I’m probably pretty close to the top of the curve, with 1,140 Mbps up/down according to my plan. In actuality, though, my speed test reads 864 Mbps up and 859 Mbps down.

Thank you. That’s what I wanted to know.
Depends. I often click on articles based on the summary, because the article link is usually posted before the summary is. Sometimes the summary doesn’t really explain enough for me to understand. Other times I want to know more. But when you use ChatGPT to answer a query, you usually don’t leave that page to get more information, and that’s the problem I’m pointing out. Usually you don’t even have a link to where the information in the summary came from (my experience is limited to Google’s Gemini, which I don’t use, but which for a while was front and center on any query I typed in).
Not exactly. People don’t click on ads when ads are blocked. But ad aggregation companies get paid in a couple of different ways. Click-through is a big one, but ad impressions (eyeballs that supposedly viewed an ad) are also a thing. And impressions pay, just not as well as click-throughs. Ad companies haven’t stopped paying aggregators for ad space. That’s why ads on paid services have gotten more egregious. It’s not because they aren’t getting paid. It’s because they want both.
For what it’s worth, you can (and some do) pay for subscriptions to websites or services on the internet. But nobody is paying ad aggregation companies with the intent of seeing ads, regardless of the reality.
Also, ad blocking as a whole is for security as much as it is for quality of life. Ad aggregation companies have a habit of taking the money and asking questions only when they get complaints (if then) and as a result, they don’t leave users who want to protect themselves another choice.
Of course, there’s also the fact that one way or another the web can’t just be free. Someone somewhere has to pay for the resources that make it run and the upkeep it requires.
The thing that’s mostly wrong with AI summaries is that people don’t click through to the page the summary summarizes, so those sites don’t get ad revenue. That ad revenue is the backbone of the internet for a lot of sites. If there’s no site posting the information, then the AI has nothing to summarize and provide an overview of. The pivot to AI LLMs is likely to kill the companies who aggregate links, and they’re pushing for it anyway, hoping to make it profitable in the long term, because they’ve been actively enshittifying ad aggregation via search for the purposes of big-number-must-go-up (you know, for the shareholders). It defeats the current business model of most of the internet. And the shareholders do not care so long as they get their money.

Would implementing something like this prevent this problem?

I need more information. How is the malware being distributed to these devices? How can we check if our credentials are in this dump? Shouldn’t the respective platforms be doing due diligence, notifying those affected and asking them to change their passwords?
I feel it’s fairly likely that this infostealer malware is the type distributed by dubious apps that the Play Store and similar stores have had to take down, without actively notifying the users who installed them. Is it predominantly phones that are affected, or is this malware PC-based? Changing your passwords is important, but sounding the alarm with no actual information is just… ill-advised. It’s fear mongering.
I believe this is what was referenced in the Night Watch books by Terry Pratchett. I have found reading this article to be both horrifying and fascinating.
Could compete how? He gonna buy infrastructure? He gonna be an MVNO? (None of whom are really competitive with the big players because a lot of them are either regional, have to buy batches of data and minutes from the big three, or have pretty bad service). And who’s going to buy it? His supporters? I doubt that. This is just another grift.
I want you to explain to me how, when Google does it (allowing anyone with an app to report a speed trap, you know, where law enforcement is present), it’s legal, but when some random developer who’s not a multi-million-dollar corp does it, it’s illegal and obstruction.
I’ll wait for your list of case law.