How would this prevent someone from just plugging ElevenLabs into it? Or the inevitable more realistic voice models? Or just a prerecorded spam message? It's already nearly impossible to tell if some speech is human or not. I do like the idea of recovering the emotional information lost in speech -> text, but I don't think it'd help the LLM issue.
Detecting "human speech" means shutting out people who cannot speak and rely on TTS for verbal communication.
Also speech impediments, accents, physical disabilities, etc etc.
Tech culture just refuses to even be aware of people as physical beings. It's just spherical users in a vacuum and if you don't fit the mold, tough.
True. However, recording a voice message has higher friction than typing "chatgpt, write me a reply".
Or also a genuine human voice reading a script that’s partially or almost entirely LLM written? I think there must be some video content creators who do that.
Cool idea! You should make it so that only one audio message can play at a time (currently, if I click to start two, they both play simultaneously).
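For what it's worth, single-playback is a small client-side fix. A minimal browser sketch (the `othersToPause` helper and the capture-phase wiring are my own illustration, not the site's actual code):

```javascript
// Hypothetical helper: given the player that just started and the list of
// all players, return the ones that should be paused.
function othersToPause(started, players) {
  return players.filter((p) => p !== started);
}

// Browser wiring (sketch). 'play' events do not bubble, so we listen in the
// capture phase on the document to catch them from any <audio> element.
if (typeof document !== 'undefined') {
  document.addEventListener('play', (event) => {
    const all = [...document.querySelectorAll('audio')];
    for (const audio of othersToPause(event.target, all)) {
      audio.pause();
    }
  }, true); // true = capture phase
}
```

The capture-phase flag matters here: media `play` events fire on the element but do not bubble, so a plain document-level listener would never see them.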
Impressive tech execution, but the format has fundamental scaling issues.
Clubhouse lost 93% of users from peak. WhatsApp sends 7 billion voice messages daily - but those are DMs, not feeds.
The math doesn't work: reading is 50-80% faster than listening. You can skim 50 text posts in 100 seconds. 50 voice posts? 15 minutes.
Voice works async 1-to-1. You built Twitter where every tweet is a 30-second voicemail nobody has time to listen to.
The transcription proves it - users will read, not listen. Which makes this a "text feed with worse UX".
> I saw this tweet: "Hear me out: X but it's only voice messages (with AI transcriptions)" - and couldn't stop thinking about it.
> Why this exists: AI-generated content is drowning social media.
> Real-time transcription
... So you want to filter out AI content by requiring users to produce audio (not really any harder for AI than text), and you add AI content afterward (the transcriptions) anyway?
I really think you should think this through more.
The "authenticity" problem is fundamentally about how users discover each other. You get flooded with AI slop because the algorithm pushes it in front of you. That algorithm is easily gamed, and every existing competitor is financially incentivized to run such an algorithm and not care about the slop.
Also, I looked at the page source and it gives a strong impression that you are using AI to code the project and also that your client fundamentally works by querying an LLM on the server. It really doesn't convey the attitude supposedly motivating the project.
Nice tech demo though, I guess.
Neat idea! Not sure I'm willing to register just to try it, though. Having the main feed public would be nice! Or even a sample feed.
That's a good call. While there's no general public feed, individual profiles are public. For example, here's mine: https://voxconvo.com/siim
So you're going to reject recordings detected as computer generated, or human recorded from a computer-generated script?
I feel like you are making your users jump through hoops to do bot and slop detection, when you ought to be investing in technology to do the same. Here is a focusing question: would you still demand audio recordings if you had that technology?
Maybe you will court an interesting set of users when you do this? I just know I will not be one of them; ain't got time for that. Good luck.
Did you ever use AirChat?
The idea is cool, but the STT is inaccurate (at least with an accent), and having to correct each word is too cumbersome.
“Sign in with Google”
:grimace:
Sorry, but I have to pass.