“I’ve created this amazing program that more or less precisely mimics the response of a human to any question!”
“What if I ask it a question where humans are well known to apply all kinds of biases? Will it give a completely unbiased answer, like some kind of paragon of virtue?”
“No”
<Surprised Pikachu face>
After checking that you can open UDP port 53 yourself with, say, nc (which you tried), strace the binary that tries to open port 53 and fails, and find the system call that fails. You can compare it with an strace of nc to see how the two differ.
If this doesn’t clue you in (e.g., you see two attempts to listen on the same port…), the next step would be to find where in the source code it fails (look for the error message printout), start adding diagnostic printouts before the failing system call, and compile and run your edited version.
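For illustration, here’s a minimal stand-alone sketch (my own, not the failing binary’s actual code) that makes the same socket()/bind() calls on UDP port 53 and prints the errno in plain words; comparing its output against the strace of the failing binary quickly tells you whether the problem is permissions (EACCES) or the port already being held (EADDRINUSE):

```c
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void) {
    /* Create a UDP socket, as the failing binary presumably does. */
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) {
        fprintf(stderr, "socket() failed: %s\n", strerror(errno));
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(53);

    /* EACCES typically means not root / missing CAP_NET_BIND_SERVICE;
     * EADDRINUSE means something else (e.g. a local resolver) already holds udp/53. */
    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        fprintf(stderr, "bind() to udp/53 failed: %s\n", strerror(errno));
        close(fd);
        return 1;
    }

    printf("bound udp/53 OK\n");
    close(fd);
    return 0;
}
```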
I really wish there were an “adult difficulty” setting to pick instead of ‘easy’. I don’t have hours to waste on hordes of “difficult” enemies that just slow progress and pad the playtime. Nor do I want a walking simulator where the boss just falls over with no need for anything beyond the most basic game mechanics. Give me an option to experience the story with an interesting challenge without wasting my time, dammit!
This is my guess as well. They have been limiting new signups for the paid service for a long time, which must mean they are overloaded, and then it makes a lot of sense to just degrade the quality of GPT-4 so they can serve all paying users. I just wish there were a way to know the “quality level” the service is operating at.
Was this around the time right after “custom GPTs” was introduced? I’ve seen posts claiming it got stupid since basically the beginning of ChatGPT, and I thought it was just confirmation bias. But somewhere around that point I felt a shift myself in GPT-4’s ability to program; where it before found clever solutions to difficult problems, it now often struggles with basics.
What are you talking about? Amazon’s digital video purchases don’t require any monthly access fee. He paid £5.99 with the idea that he’d get to keep it indefinitely, just like a physical DVD. I don’t get why you think it is OK for a seller to revert the sale of a digital item at any time for just the purchase price + £5, but (I presume?) not other kinds of sales.
Nowhere do they use terms like “rent” or “lease”. They explicitly use terms like “buy” and it’s not until the fine print that the term license even comes up.
This! It really should be illegal to present something with the phrasing “buy” unless it is provided to you via a license that prevents it from being withdrawn. To “sell” cloud-hosted media without having the licensing paperwork in place for it to be a sale is fraud.
Are you fine with me taking anything from your home as long as I pay you the purchase price + £5? Some of us assign a greater value to some of the things we own than the purchase price.
People losing media this way should sue, with the argument that it was presented to end users as a “sale”, and it is not sufficient to merely compensate someone with the purchase price to undo a sale. Companies “selling” digital products should be forced to write agreements that allow them to keep distributing the content to buyers indefinitely.
It’s because the licence holder of the movie decided Amazon can’t show it anymore. Perhaps they were asking Amazon to pay a high fee and it wasn’t worth it.
I get that this is what the license holder wants. But, why can’t we just put into law that a license is not needed for a company to host, retransmit and play copyrighted media on behalf of a user once the license holder has been compensated as agreed for a sale?
There’s an old saying in Tennessee — I know it’s in Texas, probably in Tennessee — that says, fool me once, shame on — shame on you. Fool me — you can’t get fooled again.
A few things:
Unity is still bleeding money. They have a product that could be the basis for a reasonably profitable company, but their spending billions on a microtransaction company means that is not sufficient for their current leadership. It doesn’t seem wise to build your business on the product of a company whose business plan you fundamentally disagree with.
It would be best for the long-term health of business-to-business services if we as a community manage to send the message that it doesn’t matter what any contract says - just trying to introduce retroactive fees is unforgivable and a death sentence for the company that tries it.
AND add a clause to the TOS banning retroactive updates of TOS to existing games.
I understand LLaMA and some other models come with instructions that say that they cannot be used commercially. But, unless the creators can show that you have formally accepted a license agreement to that effect, on what legal grounds can that be enforceable?
If we look at the direction US law is moving, it seems the current legal theory is that AI generated works fall in the public domain. That means restricting their use commercially should be impossible regardless of other circumstances - public domain means that anyone can use them for anything. (But it also means that your commercial use isn’t protected from others likewise using the exact same output).
Let us instead look at what possible legal grounds restrictions on the output of these models could be based on if you never agreed to a license agreement to access the model. Copyright doesn’t restrict use, it restricts redistribution. The creators of LLMs cannot reasonably take the position that output created from their models is a derivative work of the model, when their model itself is created from copyrighted works, many of which they have no right to redistribute. The whole basis of LLMs rests on the idea that “training data” -> “model” produces a model that isn’t encumbered by the copyright of the training data. How can one take that position and simultaneously believe “model” -> “inferred output” produces copyright-encumbered output? That would be a fundamentally inconsistent view.
(Note: the above is not legal advice, only free-form discussion.)
AutoNomous Ultra inStinct ram
Cortana is/was by far the best name of the digital assistants - probably because it was created by sci-fi story writers rather than a marketing department. They should just have upgraded her with the latest AI tech and trained her to show the same kind of sassy personality as in the games and it would have been perfect.
Who in their right mind thinks “Bing copilot” is a better name? It makes me picture something like the blow-up autopilot from Airplane!
I like calling it “x-twitter”, as it is short and makes sense when reading it out.
So, what you are saying is that by checking this trust API, we can filter out everyone running unaltered big-media-approved browsers and hardware? We’d end up splitting the web into two disjoint parts, one for big corporate and sheeple - and one more akin to the web of old, made up of skilled tech people and hobbyists? A rift that could finally bring an end to eternal September? … Are we sure this proposal is a bad thing?
Reading.
We need everyone to read more books. A wide variety of stories on a wide variety of topics by a wide variety of authors, all with different backgrounds and ideas. We must read stories that let us temporarily step into the minds and experiences of other people, who aren’t us, to train our brains in the ability to understand the plights of others. Books of human stories, as opposed to movies, doom-scrolling TikTok, etc., seem uniquely suited for this kind of training of empathy, because the stories are executed inside our own brains.
I’m willing to bet that these why-are-the-leopards-suddenly-eating-my-face?, the-only-moral-abortion-is-my-abortion type people have read distinctly fewer, or at least far less varied, stories than those of us who look at them and wonder how it is possible to be so unable to put themselves in the shoes of anyone but themselves.