Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology, politics, and science fiction.

Spent many years on Reddit before joining the Threadiverse as well.

  • 0 Posts
  • 14 Comments
Joined 1 year ago
Cake day: March 3rd, 2024

  • You should read the ruling in more detail; the judge explains why he ruled the way he did. For example:

    Authors argue that using works to train Claude’s underlying LLMs was like using works to train any person to read and write, so Authors should be able to exclude Anthropic from this use (Opp. 16). But Authors cannot rightly exclude anyone from using their works for training or learning as such. Everyone reads texts, too, then writes new texts. They may need to pay for getting their hands on a text in the first instance. But to make anyone pay specifically for the use of a book each time they read it, each time they recall it from memory, each time they later draw upon it when writing new things in new ways would be unthinkable.

    This isn’t “oligarch interests and demands,” this is affirming a right to learn and that copyright doesn’t allow its holder to prohibit people from analyzing the things that they read.


  • Argues for the importance of student essays, and then:

    When artificial intelligence is used to diagnose cancer or automate soul-crushing tasks that require vapid toiling, it makes us more human and should be celebrated.

    I remember student essays as being soul-crushing vapid toiling, personally.

    The author is very fixated on the notion that these essays are vital parts of human education. Is he aware that for much of human history, and even today in many regions of the world, essay-writing like this wasn't so important? I think one neat element of AI's rise will be the revival of other teaching methods that have fallen by the wayside: Socratic dialogue, debate, personal one-on-one tutoring.

    I've been teaching myself some new APIs and programming techniques recently, for example, and I'm finding it far easier to have an AI talk me through them than to grind through documentation directly.


  • I’m interested to see how this turns out. My prediction is that the AI trained from the results will be insane, in the unable-to-reason-effectively sense, because we don’t yet have AIs capable of rewriting all that knowledge and keeping it consistent. Each little bit of it considered in isolation will fit the criteria that Musk provides, but taken as a whole it’ll be a giant mess of contradictions.

    Sure, the existing corpus of knowledge doesn't all say the same thing either, but its contradictions follow deeper, consistent patterns. An AI trained on Reddit will learn drastically different outlooks and information from /r/conservative comments than it would from /r/news comments, but the fact that those are two identifiable communities means it'd see a higher-order consistency to this. If anything, that'll help it understand that there are different views in the world.


  • Read the article.

    Kehoe countered that the AI system would interact only with nonemergency callers and that emergency calls to 911 would be routed only to human dispatchers. In fact, she added, “on nonemergency calls, it might detect those elevated stress levels [for callers] and it will automatically default going to a human being as well.”

    “There are a lot of safeguards,” Kehoe added, “to ensure that even with the tiniest bit of doubt, we don’t have someone just sitting on the phone and not getting help.”

    The AI system will reroute only the calls it can determine are not emergencies. The default will be to let calls through to the human staff. It's not going to be some sort of primitive "press 1 if you are currently on fire" menu system.