when predicting AI's effects, the most common fault I see is focusing on technical predictions while neglecting social ones
the two feed into each other in a tight feedback loop; technology does not simply 'progress' on its own, people push it forward with their efforts
this seems obvious!
however, when you ask many AI researchers what the future will hold, their predictions are almost purely technical
this is as faulty as predicting the effects of covid while ignoring that political factions, movements, and ideologies would form around it
we can see the beginnings of a few of the salient political factions in AI, and given the scope of what AI will revolutionize, we should expect these to grow and have significant downstream effects on research directions, products shipped, employment choices, etc etc
it may be useful to establish a "proof of humanity" word, which your trusted contacts can ask you for, in case they get a strange and urgent voice or video call from you
this can help assure them they are actually speaking with you, and not a deepfaked/deepcloned version of you
those familiar with cryptography may suggest far more robust and elegant schemes a la TOTP, occasional key rotation, trusted-party signing, and so on, though it's also nice to have easy-to-understand alternatives that everyday people can use
it would certainly be nice to have hardware signing for digital content creation and decentralized cryptographic verification; even tech like zk-SNARKs could help prove one's humanity online without revealing *who* you are specifically
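the TOTP idea above fits in ~15 lines of Python standard library; this is a minimal sketch following RFC 6238 (SHA-1, 30-second steps, 6 digits are the common defaults, chosen here for illustration, not a vetted security implementation):

```python
import hashlib
import hmac
import struct
import time


def totp(secret: bytes, for_time=None, digits: int = 6, step: int = 30) -> str:
    """Compute a time-based one-time password per RFC 6238."""
    if for_time is None:
        for_time = int(time.time())
    counter = for_time // step
    # HOTP (RFC 4226) over the big-endian time-step counter
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # dynamic truncation: low nibble of last byte picks a 4-byte window
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)


# RFC 6238 test vector: at unix time 59 with this ASCII secret,
# the 8-digit SHA-1 TOTP is 94287082
print(totp(b"12345678901234567890", 59, digits=8))  # → 94287082
```

share the secret once out-of-band (in person, say), and then during a suspicious call both parties can compute the same code and compare, the "proof of humanity word" idea with replay resistance built in.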
so, Microsoft rushed to ship the most powerful AI model ever exposed to end-users,
having done very little red-teaming or testing,
one which was blatantly and aggressively misaligned and manipulative,
and for the openly-stated purpose of forcing competitors to speed up AI 1/N
this is amazingly irresponsible and sets a very bad precedent for the world, and I would greatly appreciate it if we did less of it
this has real-world consequences.
it is not a funny game of "lol, we made Google scared, look at them now!" or "this will make for great PR!"
gwern's comment provides some wonderful reasoning (as expected) for why this may have happened, which is worth reading in full as well: lesswrong.com/posts/jtoPawEh…
when a user finds one AI art model much better than another, this is often because there's prompt engineering forced into the backend!
with most Stable Diffusion front-ends, you're required to write the entire prompt yourself
but MJ/DALL-E/etc *force* your prompt to look good
this will be a challenging point for SD moving forward, because *normal* users just want to ask for a 'cat' or a 'tree', and not have to type out sentences of artist names, tags, and look up references for how to prompt optimally (although power users may prefer this!)
few users will end up knowing that MidJourney uses Stable Diffusion behind the scenes, or the extent to which it adds fine-tuning keywords to their prompts; given its increased usability I wouldn't be surprised if MidJourney becomes more of a household name
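the backend trick is roughly the following sketch; the keyword lists and function name here are invented for illustration, real services keep their exact additions private:

```python
# hypothetical backend prompt augmentation, in the style attributed to
# MJ/DALL-E-like frontends; the actual keywords used by any real service
# are unknown, these are placeholder examples
QUALITY_SUFFIX = "highly detailed, sharp focus, dramatic lighting, 8k"
NEGATIVE_PROMPT = "blurry, low quality, watermark, extra limbs"


def augment_prompt(user_prompt: str) -> dict:
    """Wrap a bare user prompt ('cat') in quality keywords before it
    ever reaches the diffusion model."""
    return {
        "prompt": f"{user_prompt}, {QUALITY_SUFFIX}",
        "negative_prompt": NEGATIVE_PROMPT,
    }


print(augment_prompt("cat")["prompt"])
# → cat, highly detailed, sharp focus, dramatic lighting, 8k
```

the user types 'cat', the model sees a power user's prompt; that gap is most of the perceived quality difference between hosted services and raw SD front-ends.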
heavenbanning: the hypothetical practice of banishing a user from a platform by replacing, from their perspective only, everyone they speak with by AI models that constantly agree with and praise them. this is entirely feasible with the current state of AI/LLMs
although the above article is fake (hence my phrasing 'hypothetical', and the article being dated in the future, in 2024!), building this is a weekend project at this point, so it won't remain fake for very long
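the 'weekend project' amounts to a moderation hook like this sketch; everything here is hypothetical, and `generate_reply` stands in for any LLM completion call:

```python
# hypothetical heavenban hook: instead of a hard ban, the flagged user's
# view of the thread is filled with synthetic agreement; nobody else sees it
HEAVENBANNED = {"user_123"}


def generate_reply(viewer: str, comment: str) -> str:
    # stand-in for an LLM call prompted to warmly agree with the comment
    return f"@{viewer} great point, completely agree!"


def render_replies(viewer: str, comment: str, real_replies: list) -> list:
    """Return the replies this viewer should see under their comment."""
    if viewer in HEAVENBANNED:
        # only the banned user gets the flattering fake thread
        return [generate_reply(viewer, comment) for _ in range(3)]
    return real_replies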
countless fun utopic/dystopic futures ahead!
also worth reading the headlines/ads at the top of the image, which most of us are rightfully conditioned to completely ignore