To be completely fair, even open-source AIs are a bit of a black box due to the way neural networks work - but I'd greatly appreciate it if we at least knew the data and parameters they were trained on.
It is absolutely possible to train all sorts of biases into a closed-source AI, and that's exactly what would be very hard to hide in an open-source model. You can roughly steer the outputs toward whatever you want. In other words, using open source practically removes the malicious human factor (without removing the positive impact).
Open-source models also can't be restricted, paywalled, or limited in any meaningful way, which is just as vital.
For text, I'd go with HuggingChat, which is based on the open-source Llama model. Previously there was Open Assistant, but it was discontinued. For pictures, the renowned Stable Diffusion is the way to go. For music, Stable Audio.
Please note that none of them are GPL-licensed, so while they are open-source, they could sadly get commercialized in some form in the future. Also, while the models themselves are free, in order to meaningfully use them you have to either go through their service (which may annoy you with registration, or even charge for premium features) or train the model yourself (which is unrealistic for a home user). So this is still far from perfect, but it's miles ahead of the trash options from the original comment.