I guess there are probably a lot of people trading that stuff dumb enough to be networking on facebook and instagram with their real identities
The listing notes that special operations troops “will use this capability to gather information from public online forums,” with no further explanation of how these artificial internet users will be used.
Any chance that’s the real reason and not just a flimsy excuse? What kind of information would you even need a fake identity to gather from a public forum?
changing how its “block” button works. That option previously allowed users to hide their profile from certain accounts – but will no longer do so.
So I guess all that stuff they did to lock down the ability to see things on Xitter without an account was strictly for evil then
If you are at the point where you are having to worry about government or corporate entities setting traps at the local library? You… kind of already lost.
What about just a blackmailer assuming anyone booting an OS from a public computer has something to hide? They’d have write access to the machine, and there’s no real defense against that. It wouldn’t have to be everywhere, either, since people seeking privacy this way have to keep picking new locations each time. An attack like that wouldn’t have to be targeted at any particular person.
Isn’t it risky plugging usb drives into untrusted machines?
This is the good kind of AI that’s actually useful instead of the BS AI like LLMs
lol, trying to hedge against downvotes from the anti-AI crowd?
I doubt the school administrators who would be buying this thing or the people trying to make money off it have really thought that far ahead or care whether or not it does that, but it would definitely be one of its main effects.
it may be moral in some extreme examples
Are they extreme? Is bad censorship genuinely rare?
but there are means of doing that completely removed from the scope of microblogging on a corporate behemoth’s web platform. For example, there is an international organization whose sole purpose is pursuing human rights violations.
I think it’s relevant that tech platforms, and software more generally, have a sort of reach and influence that international organizations do not, especially when it comes to the flow of information. What is the limit you’re suggesting here on what may be done to oppose harmful censorship? That it be legitimized by some official consensus? That a “right to censor” exist and be enforced, but be subject to some form of formalized regulation? That would exempt any tyranny of the most influential states.
I’m going to challenge your assertion that you’re not talking about
You can interpret my words how you want, and I can’t stop you willfully misinterpreting me, but I am telling you explicitly what I am saying and what I am not saying, because I have something specific I want to communicate. When you argue that
I believe each country should get to have a say in what is permissible, and content deemed unacceptable should be blockable by region
In the given context, you are asserting that states have an apparently unconditional moral right to censor, and that this right means third parties have a duty to go along with it and not interfere. I think this is wrong as a general principle, independent of the specific example of Twitter vs Brazil. If the censorship is wrong, then it is ok to fight it.
Now you can argue that some censorship may be harmful because of its impact on society, such as the removal of books from school hampering fair and complete education or banning research texts that expose inconvenient truths.
Ok, but the question is, what can be done about it? Say a country is doing that. A web service defies that government by providing downloads of those books to its citizens. Are they morally bound to not do that? Should international regulations prevent what they are doing? I think no, it is ok and good to do, if the censorship is harmful.
Since my argument isn’t about what should be censored, I’m intentionally leaving the boundaries of “harmful censorship” open to interpretation, save the assertion that it exists and is widely practiced.
I also think that any service (twitter) refusing to abide by the laws of a country (Brazil) has no place in that country.
That could be true in a literal sense (the country successfully bans the use of the service), or not (the country isn’t willing or able to prevent its use). Morally though, I’d say you have a place wherever people need your help, whether or not their government wants them to be helped.
If a government is imposing harmful censorship, I think supporting resistance to that censorship is the right thing to do. A company that isn’t located in that country ethically shouldn’t be complying with such orders. Make them burn political capital taking extreme and implausible measures.
Can anyone recommend any cool mods/projects built on top of Minetest?
The output for a given input cannot be independently calculated as far as I know, particularly when random seeds are part of the input.
The system gives a probability distribution for the next word based on the prompt, which will always be the same for a given input. That meets the definition of deterministic. You might choose to add non-deterministic rng to the input or output, but that would be a choice, not something inherent to how LLMs work. Random ‘seeds’ are normally used as part of deterministically repeatable rng. I’m not sure what you mean by “independently” calculated; you can calculate the output if you have the model weights, and you likely can’t if you don’t, but that doesn’t affect how deterministic it is.
The “so what” is that trying to prevent certain outputs based on moral judgements isn’t possible. It wouldn’t really be possible if you could get in there with code and change things, unless you could write code for morality, but it’s doubly impossible given you can’t.
The impossibility of defining morality in precise terms, or even coming to an agreement on what correct moral judgment is, obviously doesn’t preclude all potentially useful efforts to apply it. For instance, since there is a general consensus that people being electrocuted is bad, electrical cables are normally made with their conductive parts encased in non-conductive material, a practice that is successful in reducing how often people get electrocuted. Why would that sort of thing be uniquely impossible for LLMs? Just because they are logic processing systems that are more grown than engineered? Because they are sort of anthropomorphic but aren’t really people? The reasoning doesn’t follow. What people are complaining about here is that AI companies are not making these efforts a priority, and it’s a valid complaint, because it isn’t the case that these systems are going to be the same amount of dangerous no matter how they are made or used.
They are deterministic though, in a literal sense; rather, their behavior is undefined. And yes, an LLM is not a person, and it’s not quite accurate to talk about them knowing or understanding things. So what though? Why would that be any sort of evidence that research efforts into AI safety are futile? This is at least as much an engineering problem as a philosophy problem.
It’s important though because if that’s the real reason Google pays them, they could come up with some other excuse to give them the money.
This one can do that stuff: https://github.com/huchenlei/ComfyUI-layerdiffuse?tab=readme-ov-file
Seems broken
We’re unable to submit your comments to congress because of a problem on our end. We apologize for the inconvenience. Please try again later.
I don’t think that person is bragging, just saying why it’s useful to them
Why do you need special qualifications to work for DoorDash lol
But I think the point is that the OP meme is wrong to paint this as some kind of society-wide psychological pathology, when it’s really business people coming up with simple, reliable formulas to make money. The space of possible products people could want is large, and this choice isn’t only about what people want, but about what will get attention. People will readily pay attention to, and discuss with others, something they already have a connection to in a way they wouldn’t with some new thing, even if they would rather have something new.