Love it! About practical application #1... Humans do pattern recognition also. In fact, isn't the literary act (or is it the rhetorical act???) in large part about the human discovery of pattern in the text? Isn't writing an act of discovery? And isn't that discovery deeply connected to the worldly context? In terms of technical writing, isn't that discovery part of the writer's value to the enterprise? So my question is, how do we exploit LLM pattern recognition without diluting the experience of discovery? What classes of pattern are more fruitfully discovered by the machine than by the writer?
Maybe I would add ... pattern recognition at scale. I think there is a distinction to be made between the kinds of patterns AI picks up and the human discovery aspect you speak of. I'm not sure I can articulate what that is at the moment.
One thing that does come to mind is association. I don't think AI is great at making new associative connections without human interaction.
Just shooting from the hip, but I think there's something important about feel. Damasio points out that what we call thought starts with feeling (emotion). As for recognizing patterns and making associations, it makes sense that animals do that by feel. Sure, it gives us all sorts of erroneous results, but we don't call that hallucination -- delusion or superstition, maybe, but not hallucination. And maybe feeling is why we're more flexible and energy efficient. Maybe that's another line of distinction... Feel is organic, while machines track statically defined weights.
Another thought occurs... You might be able to say that the process of model embedding is pattern recognition (but maybe not...). But LLM output isn't recognition so much as pattern response. I think that might be an important distinction.
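Roughly, in code terms -- and this is just a toy NumPy sketch, with made-up vectors and logits standing in for a real model's embeddings and output layer (the names doc_a, doc_b, the tiny vocab, etc. are all illustrative, not any particular model's API) -- comparing embeddings is a recognition move, while sampling the next token is a response move:

```python
import numpy as np

# --- "Recognition": compare embeddings ---
# Toy 4-dimensional vectors standing in for real model embeddings.
doc_a = np.array([0.9, 0.1, 0.3, 0.0])
doc_b = np.array([0.8, 0.2, 0.4, 0.1])
doc_c = np.array([0.0, 0.9, 0.1, 0.8])

def cosine(u, v):
    """Cosine similarity: how closely two vectors point the same way."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print("a vs b:", round(cosine(doc_a, doc_b), 3))  # high -> similar pattern
print("a vs c:", round(cosine(doc_a, doc_c), 3))  # low  -> different pattern

# --- "Response": continue a pattern by sampling the next token ---
# Toy logits over a tiny vocabulary, standing in for a model's output layer.
vocab = ["pattern", "discovery", "feeling", "context"]
logits = np.array([2.0, 1.0, 0.5, 0.2])
probs = np.exp(logits) / np.exp(logits).sum()   # softmax
next_token = np.random.default_rng(0).choice(vocab, p=probs)
print("sampled next token:", next_token)
```

The first half surfaces which texts share a pattern; the second half just continues one. That's roughly the recognition/response split I mean.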
I think that all makes sense. I wish we would focus more on this than on trying to catch people using AI or trying to prove how smart (or unsmart) AI is. 😆