Edit to add: I also found someone who recorded a voice chat of the same thing. This isn’t that someone uploaded a song, or that AI didn’t actually process the file. These models really are this sycophantic:
The reality is that it is a next-word prediction machine. There probably aren't any examples in the training data of people writing music reviews of something that isn't music. It probably interprets the sound as best it can as "music" (and its best is likely very bad to begin with), and then, since the prompt asked for a music review, it uses next-word prediction to write one, which of course comes out looking like a typical music review. It's not going to interpret the sound as "not music," especially since you told it that it IS music.
True, but the principle behind the post is the beauty here. Outside the API, it costs these companies an unsustainable amount of money to make their models listen to fart sounds. I don't use any AI myself, but I support anyone who wants to abuse a flat monthly subscription to make a company burn through money so that a plagiarism model can praise fart sounds.