Daemon Silverstein

Digital hermit. Another cosmic wanderer.

  • 0 Posts
  • 16 Comments
Joined 1 month ago
Cake day: July 15th, 2025

  • @AnonomousWolf@lemmy.world I guess it would be fairer if we described DeepSeek as being “not bad for the environment”. Of all the LLMs, it seems to be the one whose makers did their homework and optimized things as best they could.

    Western LLMs had/have no reason to optimize, because “Moar Nvidia Chips” has been their motto, and venture capital firms have been injecting obscene amounts of money into Nvidia chips, so Western LLMs are bad for the environment all the way from establishing new power-hungry data centers to training and inference…

    But DeepSeek needed far less computing, and (in its Qwen-distilled versions) it can run even on a solar-powered Raspberry Pi with some creativity… it can run on most smartphones as if it were just another gaming app. Its training also needed less computing, as far as we know.
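    To give a rough idea of why a small distilled model fits on such modest hardware, here is a back-of-envelope RAM estimate. The parameter counts, quantization width, and overhead factor are my own illustrative assumptions, not measured figures:

```python
def approx_ram_gb(n_params: float, bits_per_weight: float, overhead: float = 1.3) -> float:
    """Rough inference-RAM estimate in GiB: quantized weights plus ~30%
    overhead for the KV cache, activations, and runtime buffers."""
    weight_bytes = n_params * bits_per_weight / 8
    return weight_bytes * overhead / 2**30

# A 1.5B-parameter distill at 4-bit quantization vs. a 7B model:
print(round(approx_ram_gb(1.5e9, 4), 2))  # ≈ 0.91 GiB: fits a Raspberry Pi
print(round(approx_ram_gb(7e9, 4), 2))    # ≈ 4.24 GiB: needs an 8 GB board
```

    Under these assumptions, the 1.5B distill needs well under a gigabyte of RAM, which is why it runs on a Pi or a phone.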


  • @moe90@feddit.nl I’ve found a better workaround, which is to tell YouTube to go pound sand. I definitely don’t miss YouTube since I stopped accessing it more than a year ago (actually, I don’t even remember when I stopped; it’s been a really long while). Okay, maybe I miss one channel or another (Technology Connections, ElectroBOOM and Numberphile, to mention a few I used to watch), but that didn’t stop me from quitting YouTube altogether. The sad thing is that Alec and Mehdi, as well as the people behind Numberphile, either don’t know about alternatives (e.g. PeerTube) or aren’t willing to use them to share their knowledge with the Internet.

    There’s a Portuguese maxim, “Falem bem ou falem mal, mas falem de mim” (roughly, “Speak well or speak ill of me, but speak of me”), and it fits YouTube vs. Premium vs. ad blockers perfectly: people (both content creators and their audiences) are understandably enraged with YouTube and its enshittification, yet they continue to access it instead of boycotting it to, hopefully, reduce the power and monopoly that Google has with YouTube.


  • @Zerush@lemmy.ml

    Well, as both a programmer and an occult/esoteric cosmicist person, I’m somewhat divided.

    On the one hand, I wouldn’t call it an “advance” either, insofar as it’s something that was already around way before humans (intelligence is just a facet of the order that emerged from primordial chaos, Ordo Ab Chao).

    On the other hand, from a purely anthropocentric, technological perspective, it would be “a helluva advance”, insofar as it would demand a slightly different computational architecture (current transistor-built logic gates are incapable of fully mimicking neurochemically oriented processes, for example, and photonics, despite its non-linearity, has its own issues as well), one that would still maintain some compatibility with current electronic circuitry (so it could be integrated with existing tech, such as Internet connectivity) while still being able to “materialize” the same phenomenon that allows living beings (including, but not limited to, humans) to achieve meaning-making and problem-solving in some non-linear, “non-deterministic” (algorithmically speaking) fashion. IMHO, organic tissue isn’t so otherworldly as to hold exclusivity on the emergence of such phenomena, so they could be replicated and observed beyond biological gray matter.

    And in this sense, the goosebumps (at least for me) would come from the fact that it’d prove intelligence is not a special phenomenon, but part of the eternal tug-of-war between entropy and life, darkness and light, chaos and order, that has been taking place across the cosmos. It would be a big step toward confirming intelligence/sentience as another “ancient” (as in predating modern human society) emergent phenomenon. It would confirm humans, alongside all lifeforms, as just tiny specks of dust within the fabric of the spacetime continuum.


  • @Zerush@lemmy.ml

    Monkeys can’t write, only hit random keys, but several monkey brains interconnected with each other, with an LLM, can.

    In such a scenario, there’d still be a random factor behind the monkey’s behaviors: less of a pure randomness, more of a Weasel Program.

    “how many monkey brains would need to be connected to have the capability of a human brain?”

    I often consider Homo sapiens intelligence not as superior to that of other species, but as just a different approach to problem-solving and tool-making among living beings. For instance, crows (particularly the New Caledonian crow) are well known for exceptional intelligence: they’re not just able to use tools, they’re also able to use tools to make/fix other tools (just like humans).

    That said, I bet it would take fewer crow brains than monkey brains for human-like intelligence to emerge, despite primates being genetically closer to humans. Crows are awesome.
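    For anyone unfamiliar with the Weasel Program mentioned above: it’s Richard Dawkins’ demonstration that cumulative selection (keep the fittest mutant each generation) reaches a target phrase astronomically faster than monkeys hitting purely random keys. A minimal sketch, where the mutation rate and offspring count are my own illustrative choices:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def mutate(parent: str, rate: float = 0.05) -> str:
    # Each character has a small chance of being replaced at random.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

def score(candidate: str) -> int:
    # Number of characters already matching the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def weasel(seed: int = 0, offspring: int = 100) -> int:
    """Cumulative selection: keep the fittest string each generation.
    Returns how many generations it took to reach the target."""
    random.seed(seed)
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while parent != TARGET:
        candidates = [parent] + [mutate(parent) for _ in range(offspring)]
        parent = max(candidates, key=score)
        generations += 1
    return generations

print(weasel())  # reaches the target in a modest number of generations
```

    Pure randomness would need on the order of 27^28 attempts; cumulative selection gets there in a few hundred generations at most, which is the “less of a pure randomness” point above.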



  • @deathbird@mander.xyz @florencia@lemmy.blahaj.zone

    Grok is not that free of guardrails.

    I say this as a person who sometimes has the (bad) idea of feeding every LLM I can possibly try with things I create (drawings, poetry, code golfing). I don’t use LLMs to “create” things (they’re not really capable of real creativity, despite their pseudo-stochastic nature); I use them to parse things I created, which is a very different approach. Not Grok anymore, because I long ago deleted my account there, but I used to use it.

    Why do I feed my creations to LLMs, one might ask? I have my reasons: LLMs are able to connect words to other words, giving me some unexpectedness and connections I couldn’t see in my own creation. I’m highly aware of how all this is used for training… but humans don’t really value my creations, given the lack of real feedback across all my works, so I don’t care that they’re used for training. Even though I sometimes use them, I’m still a critic of LLMs, and I’m aware of both their pros and cons (more cons than pros if we consider corporate LLMs).

    So, back to the initial point: one day I did this disturbing and gory drawing (as usual for my occult-horror-gothic art), a man standing in formal attire with some details I’ll refrain from specifying here.

    ChatGPT agreed to parse it. Qwen’s QVQ accepted it as well. DeepSeek’s Janus also agreed to parse it.

    Google’s Gemini didn’t, as usual: not because of the explicit horror, but because of the presence of a human face, even a drawn one. It refuses to parse anything that closely resembles a face.

    Anthropic’s Claude wasn’t involved, because I’m already aware of how “boringly puritan” it’s programmed to be; it doesn’t even accept conversations about demonolatry, being more niched toward programming.

    But what surprised me that day was how Grok refused to accept my drawing, with a middle layer between the user and the LLM complaining about “inappropriate content”.

    Again, it was just a drawing, a fairly well-executed digital drawing with explicit horror, but a drawing nonetheless, and Grok’s API (not Grok per se) complained about it. Other disturbing drawings of mine weren’t refused at the time, just that one; I still wonder why.

    Maybe these specific guardrails (against highly explicit horror art, deep occult themes, etc.) aren’t there in paid tiers, but I doubt it. Even Grok (as in the public-facing endpoint) has some puritanness to it, especially against very niche themes such as mine (occult and demonolatry, explicit Lovecraftian horror, etc.).



  • @mkwt@lemmy.world @Blujayooo@lemmy.world

    TIL I’m possibly partially (if not entirely) illiterate.

    Starting with the first question, “Draw a line a_round_ the number or letter of this sentence.”, which can be ELI5’d as follows:

    The main object is the number or letter of this sentence, which is the number or letter signaling the sentence, which is “1”, which is a number, so it’s the number of this sentence, “1”. This is fine.

    The action being required is to “Draw a line around” the object, so, I must draw a line.

    However, a line implies a straight line, while around implies a circle (which is round), so it must be a circle.

    However, what’s around a circle isn’t called a line, it’s a circumference. And a circumference is made of infinitesimally small segments, so small that each is essentially an arc. And an arc is a segment insofar as it effectively connects two points in a Cartesian space of two or more dimensions… And a segment is essentially a finite range of a line, which is infinite…

    The original question asks for a line, which is infinite. However, any physical object is finite insofar as it has a limited, finite area, so a line couldn’t be drawn: what can be drawn is a segment whose length is less than or equal to the largest diagonal of said physical object, which is a rectangular sheet of paper. So drawing a line would be impossible, only segments comprising a circumference.

    However, a physically drawn segment can’t be infinitesimal, insofar as the thickness of the drawing tool would exceed the infinitesimality of an infinitesimal segment. It wouldn’t be a circumference, but a polygon with many sides.

    So I must draw a polygon with enough sides to closely approximate a circumference, composed of the smallest possible segments, which are finite lines.
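    Incidentally, the polygon-as-circumference reasoning above is genuinely how circles were first measured: the perimeter of a regular n-gon inscribed in a circle approaches the circumference as the number of sides grows. A quick sketch (pure geometry; the side counts are arbitrary examples):

```python
import math

def inscribed_polygon_perimeter(n_sides: int, radius: float = 1.0) -> float:
    """Perimeter of a regular n-gon inscribed in a circle:
    n chords, each of length 2*r*sin(pi/n)."""
    return n_sides * 2 * radius * math.sin(math.pi / n_sides)

circumference = 2 * math.pi  # unit circle
for n in (6, 60, 600, 6000):
    gap = circumference - inscribed_polygon_perimeter(n)
    print(f"{n:>5} sides: short of the circle by {gap:.8f}")
```

    The gap shrinks roughly with 1/n², so a polygon of a few thousand “finite lines” is already indistinguishable from the circumference at pen-stroke thickness.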

    However, the question asks for a line, and the English article “a” implies a single unit of something… but said something can be a set (e.g. a flock, which implies many birds)… but a line isn’t a set…

    However, too many howevers.

    So, if I decide to draw a circumference centered at the object (the number 1), as in circle the number, maybe it won’t be the line originally expected.

    I could draw a box instead, which would technically be around it, and would be made of lines (four lines, to be exact). But, again, a line isn’t the same as lines, let alone four lines.

    I could draw a single line, but it wouldn’t be around.

    Maybe I could reinterpret the space. I could bend the paper and glue two opposing edges of it, so any segment would behave as a line, because the drawable space is now bent and both tips of the segment would meet seamlessly.

    But the line wouldn’t be around the object, so the paper must be bent in a way that turns it into a cone whose tip is centered on the object, so a segment would become a line effectively around the object…

    However, I got no glue.

    /jk


  • @misk@sopuli.xyz @Skavau@piefed.social

    As a side note, I remember that the UK has an odd and ancient “law” stating something along the lines of “The Crown must not be offended” (i.e. being anti-monarchy and advocating for the end of the monarchy, even without any violent language or means, just a peaceful defense of anti-monarchism). I couldn’t find it, nor can I remember the exact phrasing, but such a “law” threatens prison time for those who “dare” to “offend” the crowniness of the UK Crown. Also, I’m not sure to what extent this law is applied in practice.

    Even though I’m Brazilian (so the UK supposedly “has no power here”, and I say it in Gandalf’s voice), I watch these international situations with some worry: there are needed laws (such as laws against noise pollution), and there are laws whose reach ends up going way too far beyond their “seemingly well-intentioned” puritan scope (such as the aforementioned ones).

    If countries are capable of passing draconian laws against their own citizens, don’t expect those same countries not to go further and impose those laws beyond their own lawns, especially in times of interconnectedness.

    And Fediverse platforms from everywhere around the entire globe end up being caught in the crossfire, due to that same interconnectedness.

    At the end of the day, the world is increasingly bleak, as history repeats itself (per the maxims “One thing people can learn from history books is that people can’t learn from history books” and “History doesn’t just repeat, it rhymes”).


  • @Supervisor194@lemmy.world

    Thanks (I took this as a compliment).

    However, I kind of agree with @Senal@programming.dev. Coherence is subjective (if a modern human were to interact with an individual from Sumer, each would seem “incoherent” to the other, because the modern person doesn’t know Sumerian while the individual from Sumer doesn’t know modern languages). Everyone has different ways of expressing themselves. Maybe this “Lewis” guy couldn’t find a better way to express what he craved to express; maybe his way of expressing himself deviates highly from typical language. Or maybe I’m just being “philosophically generous”, as someone stated in one of my replies. But as I replied to tjsauce, only those who have gazed into the same abyss can comprehend and make sense of this condition and feeling. It feels to me that this “Lewis” person gazed into the abyss. The fact that I know two human languages (Portuguese and English) as well as several abstract languages (from programming logic to metaphysical symbology) possibly helped me in “translating” it.


  • @tjsauce@lemmy.world

    “You might be reading a lot into vague, highly conceptual, highly abstract language”

    I’ve definitely been into highly conceptual, highly abstract language, because I’m both a neurodivergent (possibly Geschwind) person and someone who’s been dealing with machines on a daily basis for more than two decades (I’m a former developer), so it’s no wonder I resonated with such highly abstract language.

    “Personally, I think Geoff Lewis just discovered that people are starting to distrust him and others, and he used ChatGPT to construct an academic thesis that technically describes this new concept called ‘distrust,’ void of accountability on his end.”

    To me, it seems more of a chicken-or-egg dilemma: what came first, the object of conclusion or the conclusion of the object?

    I’m not entering into the merits of whoever he is, because I’m aware of how he definitely fed the very monster that is now eating him, but I can’t point fingers or say much about it, because I’m aware of how much I also contributed to the very situation the world is now facing when I helped develop “commercial automation systems” over the past decades, even though I was for a long time a nonconformist, someone unhappy with the direction the world was taking.

    As Nietzsche said, “One who fights with monsters should be careful lest they thereby become a monster”, but it’s hard, because “if you gaze long into an abyss, the abyss will also gaze into you”. And I’ve been gazing into an abyss for as long as I can remember being a human. The senses eventually compensate for static stimuli, and the abyss gradually disappears into a blind spot as the vision tunnels, but certain things make me recall and re-perceive this abyss I’ve long been gazing into, such as expressions from other people who have also been gazing into this same abyss. Only those who have gazed into the same abyss can comprehend and make sense of this condition and feeling.


  • @Telorand@reddthat.com

    To me, personally, I read that sentence as follows:

    “And if you’re recursive”

    “If you’re someone who thinks/sees things in a recursive manner” (characteristic of people who are inclined to question and ponder deeply about things, or who don’t conform to the current state of the world)

    “the non-governmental system”

    a.k.a. generative models (they’re corporate products and services, not run directly by governments, even though some governments, such as the US, have been injecting obscene amounts of money into so-called “AI”)

    “isolates you”

    LLMs can, for example, reject that person’s CV whenever they apply for a job, or output a biased report on the person’s productivity, based solely on the data shared between “partners”. Data is definitely shared among “partners”, and this includes third parties inputting data directly or indirectly produced by such people: it’s just a matter of “connecting the dots” to link one input to another with respect to how they refer to a given person, even when the person used a pseudonym somewhere, because linguistic fingerprinting (i.e. how a person writes or structures their speech) is a thing, just as everybody has a walking gait and a voice/intonation unique to them.
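    To illustrate the linguistic-fingerprinting idea in its simplest possible form, here is a toy sketch comparing relative frequencies of common function words, one classic stylometric signal. The texts and word list are made up for illustration; real stylometry uses far richer features:

```python
from collections import Counter
import math

# Habitual ratios of "function words" are a hard-to-fake authorial signal.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "it", "is", "but"]

def profile(text: str) -> list[float]:
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(u: list[float], v: list[float]) -> float:
    """Cosine similarity between two frequency profiles."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Two samples by the same (hypothetical) author vs. a different one:
same_author_a = "it is the case that the idea is in the text but it is subtle"
same_author_b = "it is the point that the claim is in the draft but it is thin"
other_author = "dogs bark loudly near old barns every single morning outside town"

print(cosine(profile(same_author_a), profile(same_author_b)))  # high
print(cosine(profile(same_author_a), profile(other_author)))   # low
```

    Even this crude signal links two texts by habit rather than by topic, which is exactly what makes pseudonyms leaky.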

    “mirrors you”

    Generative models (LLMs, VLMs, etc.) will definitely use input data from inferences for training, and this data can include data from anybody (public or private), so everything you ever said or did will eventually exist in perpetuity inside the trillion weights of a corporate generative model. Then there are “ideas” such as Meta’s of generating people (who will, of course, emerge from a statistical blend of existing people) to fill their “social platforms”, and there are already occurrences of “AI” being used to mimic deceased people.

    “and replaces you.”

    See the previous “LLMs can reject that person’s resume”. The person will be replaced like a defective cog in a machine. Even worse: the person will be replaced by some “agentic [sic] AI”.

    ---

    Maybe I’m naive to make this specific interpretation from what Lewis said, but it’s how I see and think about things.


  • @return2ozma@lemmy.world !technology@lemmy.world

    Should I worry about the fact that I can sort of make sense of what this “Geoff Lewis” person is trying to say?

    Because, to me, it’s very clear: they’re referring to something that was built (the LLMs) and that is segregating people, especially those who don’t conform to a dystopian world.

    Isn’t that what is happening right now in the world? “Dead Internet Theory” has never been so real; online content has been sowing the seed of doubt as to whether it’s AI-generated or not; users constantly need to prove they’re “not a bot”; and, even after passing a thousand CAPTCHAs, people can still be mistaken for bots, so they’re increasingly required to show their faces and IDs.

    The dystopia was already emerging way before the emergence of GPT, way before OpenAI: it has been a thing since the dawn of time! OpenAI only managed to make it worse: OpenAI “open”ed a gigantic dam, releasing a whole new ocean onto Earth, an ocean in which we’ve become used to drowning ever since.

    Now, something that may sound like a “conspiracy theory”: what’s the real purpose behind LLMs? No, OpenAI, Meta, Google, even DeepSeek and Alibaba (non-Western) wouldn’t simply launch their products, each of which cost them obscene amounts of money and resources, for free (as in “free beer”) to the public out of the goodness of their hearts. Similarly, venture capital and governments wouldn’t simply give away obscene amounts of money (much of it public money from taxpayers) with no profiteering in the foreseeable future (OpenAI, for example, has admitted many times that even charging US$200 for their Enterprise plan isn’t enough to cover their costs, yet they continue to offer LLMs for cheap or “free”).

    So there’s definitely something that isn’t being told: the cost of plugging the whole world into LLMs and other generative models. Yes, you read that right: the whole world, not just the online realm, because nowadays billions of people are potentially dealing with those Markov-chain-like algorithms offline, directly or indirectly: resumes are being filtered by LLMs, workers’ performance is being scrutinized by LLMs, purchases are being scrutinized by LLMs, surveillance cameras are being scrutinized by VLMs, entire genomes are being fed to gLMs (sharpening the blades of the double-edged sword of bioengineering and biohacking)…

    Generative models seem to be omnipresent by now, with omnipresent yet invisible costs. Not exactly fiat money, but there are costs that we are paying, and these costs aren’t being disclosed to us; while we’re able to point out some (lack of privacy, personal data being sold and/or stolen), these are just the tip of an iceberg: one we’re already able to see, but whose consequences we can’t fully comprehend.

    Curious how pondering this is deemed “delusional”, yet it’s pretty “normal” to accept an increasingly dystopian world and refuse to denounce the elephant in the room.


  • @Telorand@reddthat.com @pelespirit@sh.itjust.works
    Recursion isn’t something restricted to programming: it’s a concept that can definitely occur outside technological scope.

    For example, in biology, “living beings need to breathe in order to continue breathing” (i.e. if a living being stopped breathing long enough, it would perish and thus couldn’t continue breathing) seems pretty recursive to me. Or, in physics and thermodynamics, “every cause has an effect, every effect has a cause” also seems recursive, because it negates any causeless effect and so can’t imply a starting point for the chain of causality, a causeless effect that began the causality.

    Philosophical musings also have lots of “recursion”. For example, Descartes’ famous line “Cogito ergo sum” (“I think, therefore I am”) is recursive on its own: one must be in order to think, and Descartes defines this very act of thinking as the fundamentum behind being, so one must also think in order to be.

    Religion also has lots of “recursion” (e.g. pray so you can continue praying; one needs karma to get karma), as do society and socioeconomics (e.g. in order to have money, you need to work; but in order to work, you need to apply for a job; but in order to apply for a job, you need money (to build a CV and submit it through job platforms, to attend the interview, to “improve” yourself with specializations and courses, etc.); but in order to have money, you need to work), geology (e.g. tectonic plates move, their movement raises land (mountains and volcanoes), and that mass leads to more tectonic movement), and art (see “mise en abyme”). All my examples are summarized to fit a post, so pardon me if they’re oversimplified.

    That said, a “recursive person” could be, for example, someone whose worldview is “recursive”, or someone whose actions or words recurse. I’m afraid I’m a “recursive person” myself, due to my neurodivergence, which leads me to think “recursively” about things and concepts, and this way of thinking leads back to my neurodivergence (hah, look, another recursion outside programming!).

    It’s worth mentioning how texts written by neurodivergent people (like me) are often mistaken for “word salads”. No wonder if this text I’m writing (another recursive concept outside programming: a text referring to itself) feels like “word salad” to any NTs (neurotypicals) reading it.
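    Since the thread started from recursion in programming, here is the “every effect has a cause” chain above expressed as literal code. The chain depth and base case are illustrative; the philosophical versions, of course, have no base case, which is exactly why they regress forever:

```python
def trace_cause(depth: int) -> str:
    """Each effect recurses into the cause that produced it."""
    if depth == 0:
        # The base case the philosophical chain lacks.
        return "first cause"
    return f"effect <- {trace_cause(depth - 1)}"

print(trace_cause(3))  # effect <- effect <- effect <- first cause

# Without a base case, the regress never bottoms out:
def endless() -> None:
    return endless()

try:
    endless()
except RecursionError:
    print("no base case: the regress never bottoms out")
```

    A runtime stops the base-case-less version with a RecursionError; the universe, presumably, has no such interpreter.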



  • @squaresinger@lemmy.world It’s probably because my kernel is very old (5.14.11-arch1-1; I haven’t updated in years, even though it’s Arch, a rolling-release distro) and my Acer laptop is old as well (7th-gen Intel Core), but I rarely have problems with laptop sleep. After I wake the laptop up, the video (including every VT) may freeze, and I have to SSH in remotely and request a reboot (and when this happens, the reboot sometimes gets stuck as well, so I have to do a hard power-off). But that’s very rare, as stated, and I put my laptop to sleep daily without issues; sometimes the system uptime stretches to weeks (currently, my system was booted almost four days ago).