(ℹ) Join us now at the IRC channel | ䷉ Find the plain text version at this address (HTTP) or in Gemini (how to use Gemini) with a full GemText version.
schestowitz[TR3] | https://lists.gnu.org/archive/html/libreplanet-discuss/2025-07/msg00015.html | Jul 18 04:24 |
---|---|---|
schestowitz[TR3] | " | Jul 18 04:24 |
schestowitz[TR3] | Jean, none of this responds to the article I linked, which I imagine | Jul 18 04:24 |
schestowitz[TR3] | you didn't read. | Jul 18 04:24 |
schestowitz[TR3] | Yes, my summary was exaggerated and simplistic because I was just trying to | Jul 18 04:24 |
schestowitz[TR3] | make the point in a direction and emphasize the linked article. Read the | Jul 18 04:24 |
schestowitz[TR3] | article. | Jul 18 04:24 |
schestowitz[TR3] | I appreciate some of Doctorow’s concerns about AI potentially | Jul 18 04:24 |
schestowitz[TR3] | undermining human autonomy and the sustainability of AI investments, | Jul 18 04:24 |
schestowitz[TR3] | but honestly, I’m not a fan of sweeping generalizations—especially | Jul 18 04:24 |
schestowitz[TR3] | when no concrete examples are provided. From what I see in actual | Jul 18 04:24 |
schestowitz[TR3] | studies and within the developer community, the opposite is true: AI | Jul 18 04:24 |
schestowitz[TR3] | tools are actively helping people work smarter and fostering | Jul 18 04:24 |
schestowitz[TR3] | innovation. The impact of AI is complex and nuanced, and if we engage | Jul 18 04:24 |
schestowitz[TR3] | with it thoughtfully, we can overcome the challenges and really | Jul 18 04:24 |
schestowitz[TR3] | harness its benefits—without throwing the whole thing under the bus. | Jul 18 04:24 |
schestowitz[TR3] | I don't know what you mean by "no concrete examples". He specifically | Jul 18 04:24 |
schestowitz[TR3] | describes cases like the single writer tasked with writing 30 "summer | Jul 18 04:24 |
schestowitz[TR3] | guides" in a short time and thus forced to rely on AI to do it. | Jul 18 04:24 |
schestowitz[TR3] | Cory never once suggested that AI isn't productive or can't be | Jul 18 04:24 |
schestowitz[TR3] | productive. He explicitly says that it *can* be, just as improvements | Jul 18 04:24 |
schestowitz[TR3] | to mechanized loom technology were productive for textile work. | Jul 18 04:24 |
schestowitz[TR3] | Neither I nor Cory denies anything about the productive | Jul 18 04:24 |
schestowitz[TR3] | *capacity* of AI. The issue is in how it actually gets used. Cory lays | Jul 18 04:24 |
schestowitz[TR3] | out with specific historical examples the assertion that technology is | Jul 18 04:24 |
schestowitz[TR3] | often first used to reduce the leverage that workers have — and then | Jul 18 04:24 |
schestowitz[TR3] | after some time, as the technology advances, it shifts more and more | Jul 18 04:24 |
schestowitz[TR3] | toward actually improving products. | Jul 18 04:24 |
schestowitz[TR3] | And neither he nor I was saying that AI is bad or should be rejected. | Jul 18 04:24 |
schestowitz[TR3] | The points are entirely about how it gets used in practice. Obviously a | Jul 18 04:24 |
schestowitz[TR3] | cooperative tech company (where the programmers are the owners) is | Jul 18 04:24 |
schestowitz[TR3] | only going to use AI in productive ways, i.e. the "centaur" approach. | Jul 18 04:24 |
schestowitz[TR3] | It's the exploitive companies that do the unhealthy reverse-centaur | Jul 18 04:24 |
schestowitz[TR3] | approach. | Jul 18 04:24 |
schestowitz[TR3] | And the reason I brought this up was in response to concerns about how | Jul 18 04:24 |
schestowitz[TR3] | AI is getting pushed into things and funded so extremely. The reason | Jul 18 04:24 |
schestowitz[TR3] | for that is the leverage and profit it can bring to the | Jul 18 04:24 |
schestowitz[TR3] | investors and owners of big corporations. Yes, at the same time there | Jul 18 04:24 |
schestowitz[TR3] | are productive things happening in AI that are just interesting, but | Jul 18 04:24 |
schestowitz[TR3] | that on its own would not be seeing the level of investment and energy | Jul 18 04:24 |
schestowitz[TR3] | use that we see now. | Jul 18 04:24 |
schestowitz[TR3] | If workers themselves find ways to use a technology to enhance their | Jul 18 04:24 |
schestowitz[TR3] | work, that's great. And if their work goes better and they get to do | Jul 18 04:24 |
schestowitz[TR3] | less of the tedious stuff and more of the most meaningful work, that's | Jul 18 04:24 |
schestowitz[TR3] | superb. That's the centaur scenario. Again, nobody is denying that | Jul 18 04:24 |
schestowitz[TR3] | scenario. The question is how AI gets used. It's not inevitably healthy | Jul 18 04:24 |
schestowitz[TR3] | or unhealthy. But if you believe that the investors and bosses in our | Jul 18 04:24 |
schestowitz[TR3] | corporate system are generally trustworthy to make choices that are | Jul 18 04:24 |
schestowitz[TR3] | healthy for the world, then you and I have a very different view of | Jul 18 04:24 |
schestowitz[TR3] | things. | Jul 18 04:24 |
schestowitz[TR3] | I'd be willing to discuss the actual points Cory is making if you | Jul 18 04:24 |
schestowitz[TR3] | actually want to deal with them. | Jul 18 04:24 |
schestowitz[TR3] | References" | Jul 18 04:24 |
schestowitz[TR3] | https://lists.gnu.org/archive/html/libreplanet-discuss/2025-07/msg00014.html | Jul 18 04:28 |
-TechBytesBot/#techbytes-lists.gnu.org | Re: Is AI-generated code changing free software? | Jul 18 04:28 | |
schestowitz[TR3] | "" | Jul 18 04:28 |
schestowitz[TR3] | What you call "AI" is just new technology powered by knowledge | Jul 18 04:28 |
schestowitz[TR3] | that gives us good outcomes; it is a new computing age, and not | Jul 18 04:28 |
schestowitz[TR3] | "intelligent" by any means. It is just computers and software. So | Jul 18 04:28 |
schestowitz[TR3] | let's not give it too much importance. | Jul 18 04:28 |
schestowitz[TR3] | There is no knowledge involved, just statistical probabilities in | Jul 18 04:28 |
schestowitz[TR3] | those "plausible sentence generators" or "stochastic parrots". | Jul 18 04:28 |
schestowitz[TR3] | Come on — "no knowledge involved" is quite the claim. Sure, LLMs | Jul 18 04:28 |
schestowitz[TR3] | don’t *understand* like humans do, but dismissing them as just | Jul 18 04:28 |
schestowitz[TR3] | “stochastic parrots” ignores what they’re actually doing: | Jul 18 04:28 |
schestowitz[TR3] | [snip] | Jul 18 04:28 |
schestowitz[TR3] | Way to intentionally ignore how LLMs work. Knowledge is an awareness of facts in a particular context. LLMs are not aware on any level and only have a statistical relation to context. Strings are not facts. The phrases "Plausible Sentence Generators" and "Stochastic Parrots" hit the nail on the head, though maybe the latter does not give real parrots their due. | Jul 18 04:28 |
schestowitz[TR3] | LLMs are probability engines and the output is just a random, but grammatically correct, walk through a pile of words. There are no facts involved in the output. There may have been some facts used for the training input but that is largely uncontrolled. LLMs can shorten, to a limited extent. They can mix and match pieces. But they can't summarize. The grammatical correctness fools a lot of people and thus the problem here | Jul 18 04:28 |
schestowitz[TR3] | is that people interpret LLM output as facts through wishful thinking. | Jul 18 04:28 |
schestowitz[TR3] | That is more like pareidolia than reality. | Jul 18 04:28 |
schestowitz[TR3] | We saw this before with Eliza. It's just on a larger scale now, wasting more electricity and money at previously unimaginable levels, all for no productive results. | Jul 18 04:28 |
schestowitz[TR3] | Thus we see daily the catastrophic failure of these systems with regard to factual output. | Jul 18 04:28 |
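The "probability engine" description above can be made concrete with a deliberately tiny sketch: a toy bigram model (purely hypothetical, nothing like a real LLM in scale) whose only operation is sampling the next word from stored probabilities. Every output is locally grammatical, yet no step consults a fact.

```python
import random

# Toy sketch (hypothetical, not a real LLM): a bigram "probability
# engine". Each next word is sampled from stored probabilities
# conditioned only on the previous word; no step checks facts.
bigram_probs = {
    "<s>":    {"the": 0.6, "a": 0.4},
    "the":    {"cat": 0.5, "moon": 0.5},
    "a":      {"cat": 0.7, "moon": 0.3},
    "cat":    {"sleeps": 1.0},
    "moon":   {"glows": 1.0},
    "sleeps": {"</s>": 1.0},
    "glows":  {"</s>": 1.0},
}

def generate(seed=None):
    rng = random.Random(seed)
    word, words = "<s>", []
    while word != "</s>":
        nxt = bigram_probs[word]
        word = rng.choices(list(nxt), weights=list(nxt.values()))[0]
        if word != "</s>":
            words.append(word)
    return " ".join(words)

print(generate(seed=0))  # a grammatical sentence such as "the cat sleeps"
```

A real model conditions on far more context and vastly more parameters, but the generation step is the same kind of weighted random walk.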
schestowitz[TR3] | Sure, LLMs can and do make mistakes, especially with facts | Jul 18 04:28 |
schestowitz[TR3] | sometimes, but dismissing the whole technology because of that | Jul 18 04:28 |
schestowitz[TR3] | overlooks how often they get things right and actually boost | Jul 18 04:28 |
schestowitz[TR3] | productivity. Like any tool, they have limits, but calling it a | Jul 18 04:28 |
schestowitz[TR3] | “catastrophic failure” across the board doesn’t do justice to the | Jul 18 04:28 |
schestowitz[TR3] | real-world benefits many of us see every day. | Jul 18 04:28 |
schestowitz[TR3] | Please name any of these benefits outside the generation and promulgation of disinformation and propaganda at scale. Right now, in the context of coding, the LLMs reduce programmer efficiency while giving the illusion of speed. | Jul 18 04:28 |
schestowitz[TR3] | More money just makes them more expensive. | Jul 18 04:28 |
schestowitz[TR3] | Sounds like you might be a bit frustrated about the price side of things — | Jul 18 04:28 |
schestowitz[TR3] | [snip] | Jul 18 04:28 |
schestowitz[TR3] | It is the models themselves which are inherently broken, not the scale at which they are run. Scaling the financial investments and electricity used does not improve or even change the underlying model. This LLM investment bubble reminds me a bit of the cryptocurrency bubble, specifically, Bitcoin and derivatives. The Bitcoin experiment ended in 2009 when Satoshi basically disposed of some very interesting research papers in the | Jul 18 04:28 |
schestowitz[TR3] | proverbial trash where they were fished out by scammers. Yet people have and will continue to bet on it like digital cockroach races or football pools. | Jul 18 04:28 |
schestowitz[TR3] | The core of the matter is that LLMs produce no useful output, and thus no savings of time or effort. In that sense they are inherently a waste of electricity. They also take developer time and money away from real projects which could produce actual results. Other machine learning might someday produce results. LLMs cannot; they are over, and wishful thinking can inflate the bubble further without producing anything of value. | Jul 18 04:28 |
schestowitz[TR3] | On 7/17/25 15:22, Jean Louis wrote: | Jul 18 04:28 |
schestowitz[TR3] | [snip] | Jul 18 04:28 |
schestowitz[TR3] | I don't find it profound, sorry. We are in a changing world; tomorrow it will be very different. | Jul 18 04:28 |
schestowitz[TR3] | And much for the worse, following the dead end technologies encompassed | Jul 18 04:28 |
schestowitz[TR3] | by LLMs. As LLM slop is foisted onto the WWW in place of knowledge and real content, it now gets ingested and processed by other LLMs, creating a sort of ouroboros of crap. | Jul 18 04:28 |
schestowitz[TR3] | As mentioned in the other message, those that are fooled into trying to incorporate the code which the LLMs have plagiarized on their behalf are losing the connection to the upstream projects. Not everyone using FOSS code can or will contribute to the project, but by having LLMs cut the connection, there is not even the chance for them to grow into a contributing role. | Jul 18 04:28 |
schestowitz[TR3] | Again, name one LLM which carries the licensing and attribution forward." | Jul 18 04:28 |
schestowitz[TR3] | https://www.thelayoff.com/t/1k07367es | Jul 18 06:34 |
-TechBytesBot/#techbytes- ( status 403 @ https://www.thelayoff.com/t/1k07367es ) | Jul 18 06:34 | |
schestowitz[TR3] | "" | Jul 18 06:34 |
schestowitz[TR3] | you make good points. | Jul 18 06:34 |
schestowitz[TR3] | 2 hours ago by Anonymous | Jul 18 06:34 |
schestowitz[TR3] | | no reactions | Reply | Jul 18 06:34 |
schestowitz[TR3] | Post ID: @nn+1k07367es | Jul 18 06:34 |
schestowitz[TR3] | 0 | Jul 18 06:34 |
schestowitz[TR3] | @jv Well, they would've been given stock this entire time annually, and tons of it would've vested, so provided they didn't sell it over the course of these 19 years they'd be sitting pretty. And if they didn't take advantage of the ESPP this entire time then that's on them. Also the 401K match is phenomenal, so if they weren't taking advantage of that as well that's on them. Even if they aren't above a 66. Even at 60-65 you are | Jul 18 06:34 |
schestowitz[TR3] | still getting the same ESPP and same 401K match, and still getting stock awarded to you. Only real difference is how much stock annually - that's it. | Jul 18 06:34 |
schestowitz[TR3] | 15 hours ago by Anonymous | Jul 18 06:34 |
schestowitz[TR3] | | no reactions | Reply | Jul 18 06:34 |
schestowitz[TR3] | Post ID: @k7+1k07367es | Jul 18 06:34 |
schestowitz[TR3] | 0 | Jul 18 06:34 |
schestowitz[TR3] | @fk: true for 66+, but certainly not for under 66 and it differs from one employee to another. | Jul 18 06:34 |
schestowitz[TR3] | " | Jul 18 06:34 |
*psydruid (~psydruid@jevhxkzmtrbww.irc) has left #techbytes | Jul 18 07:12 | |
*psydruid (~psydruid@jevhxkzmtrbww.irc) has joined #techbytes | Jul 18 07:32 | |
*psydruid (~psydruid@jevhxkzmtrbww.irc) has left #techbytes | Jul 18 07:42 | |
*psydruid (~psydruid@jevhxkzmtrbww.irc) has joined #techbytes | Jul 18 07:59 | |
*psydruid (~psydruid@jevhxkzmtrbww.irc) has left #techbytes | Jul 18 11:44 | |
*GNUmoon2 has quit (connection closed) | Jul 18 12:18 | |
*GNUmoon2 (~GNUmoon@u22aa4wq4i3zn.irc) has joined #techbytes | Jul 18 12:18 | |
*psydruid (~psydruid@jevhxkzmtrbww.irc) has joined #techbytes | Jul 18 15:14 | |
*psydruid (~psydruid@jevhxkzmtrbww.irc) has left #techbytes | Jul 18 15:35 | |
*psydruid (~psydruid@jevhxkzmtrbww.irc) has joined #techbytes | Jul 18 15:38 | |
schestowitz[TR3] | > Tiny human! Have you noticed how he posts *only* the articles that I post? I just wrote about Calibre 8.7 (which was out for a few hours now), let's see how fast Fatioli is at stealing... | Jul 18 15:59 |
schestowitz[TR3] | Come on! | Jul 18 15:59 |
schestowitz[TR3] | Don't chu know! | Jul 18 15:59 |
schestowitz[TR3] | You are a TRAINING SET. | Jul 18 15:59 |
schestowitz[TR3] | He's the "no-nonsense" man! ;-) | Jul 18 15:59 |
*psydruid (~psydruid@jevhxkzmtrbww.irc) has left #techbytes | Jul 18 17:13 | |
*psydruid (~psydruid@jevhxkzmtrbww.irc) has joined #techbytes | Jul 18 17:18 | |
*purring_jade_gibbon (~purring_j@freenode-0g7.gp4.96e19s.IP) has joined #techbytes | Jul 18 19:07 | |
*purring_jade_gibbon (~purring_j@freenode-0g7.gp4.96e19s.IP) has left #techbytes | Jul 18 19:08 | |
schestowitz[TR3] | <li> | Jul 18 19:47 |
schestowitz[TR3] | <h5><a href="https://hongkongfp.com/2025/07/17/huawei-reclaims-top-spot-in-chinas-phone-market-data-shows/">Huawei reclaims top spot in China's phone market, data shows</a></h5> | Jul 18 19:47 |
schestowitz[TR3] | <blockquote> | Jul 18 19:47 |
schestowitz[TR3] | <p>Tech giant Huawei topped China’s smartphone market for the first time in over four years, outflanking US tech giant Apple as well as local competitors including Xiaomi, according to the US-based International Data Corporation. </p> | Jul 18 19:47 |
schestowitz[TR3] | </blockquote> | Jul 18 19:47 |
schestowitz[TR3] | </li> | Jul 18 19:47 |
-TechBytesBot/#techbytes-hongkongfp.com | Huawei reclaims top spot in China's phone market, data shows | Jul 18 19:47 | |
schestowitz[TR3] | https://lists.gnu.org/archive/html/libreplanet-discuss/2025-07/msg00016.html | Jul 18 22:23 |
-TechBytesBot/#techbytes-lists.gnu.org | Re: Is AI-generated code changing free software? | Jul 18 22:23 | |
schestowitz[TR3] | "I want to respond to one particular point: | Jul 18 22:23 |
schestowitz[TR3] | On 7/16/25 04:30, Jean Louis wrote: | Jul 18 22:23 |
schestowitz[TR3] | [...] | Jul 18 22:23 |
schestowitz[TR3] | There will be a day when AI is actually productively helpful, but | Jul 18 22:23 |
schestowitz[TR3] | that's not today for most things. | Jul 18 22:23 |
schestowitz[TR3] | Well maybe not for you, I respect the opinion, though many people I | Jul 18 22:23 |
schestowitz[TR3] | know using Large Language Models (LLMs) have got tremendous assistance | Jul 18 22:23 |
schestowitz[TR3] | with work they couldn't complete themselves otherwise. It would need | Jul 18 22:23 |
schestowitz[TR3] | too large a number of people, and for an individual at a university, | Jul 18 22:23 |
schestowitz[TR3] | making those projects wouldn't even be possible. | Jul 18 22:23 |
schestowitz[TR3] | [...] | Jul 18 22:23 |
schestowitz[TR3] | I have some serious difficulties using a computer due to some neurological issues. I can type at about 0.5 - 1.0 key-stroke/second. Using a mouse is even more problematic. Clicking a button can take anywhere from 5sec to 1min or more. I can, however, talk/speak okay. I had thought, therefore, that using today's AI/LLM technology would allow me to use a computer much more efficiently by simply telling it what I wanted to do. No | Jul 18 22:23 |
schestowitz[TR3] | such luck. | Jul 18 22:23 |
schestowitz[TR3] | First, the only reasonably-Free voice control software I could find was the Numen project (see https://sr.ht/~geb/numen/ ). After building it from source (there is no pre-built package for the GNU/Linux distributions I use) and modifying the configuration files to suit my taste, I ran some tests to compare it against my "normal" computer use. The results were seriously underwhelming. It was no faster than me just using the | Jul 18 22:23 |
schestowitz[TR3] | computer unaided. The problem seems to be that Numen makes so many mistakes that correcting them negates any gains from the times when it gets everything right. | Jul 18 22:23 |
schestowitz[TR3] | Granted, my evidence is purely anecdotal. Also, there are clearly people for whom even this level of functionality would be a big plus. Still. If AI/LLM can't provide _me_ any productivity gains in performing such elementary tasks, why should I believe it would do any better at much more complicated tasks, and/or for people without my disabilities? | Jul 18 22:23 |
schestowitz[TR3] | Just my $0.02. | Jul 18 22:23 |
schestowitz[TR3] | " | Jul 18 22:23 |
-TechBytesBot/#techbytes- ( status 418 @ https://sr.ht/~geb/numen/ ) | Jul 18 22:24 | |
schestowitz[TR3] | https://lists.gnu.org/archive/html/libreplanet-discuss/2025-07/msg00017.html | Jul 18 22:24 |
-TechBytesBot/#techbytes-lists.gnu.org | Re: Is AI-generated code changing free software? | Jul 18 22:24 | |
schestowitz[TR3] | "You're right that "Sharing and contributing are different things" — and | Jul 18 22:24 |
schestowitz[TR3] | this exchange highlights a core tension in discussions about Free | Jul 18 22:24 |
schestowitz[TR3] | Software and economics. Let’s walk through the points you raise and | Jul 18 22:24 |
schestowitz[TR3] | respond directly in plain text: | Jul 18 22:24 |
schestowitz[TR3] | "Why would you protest in a family-run company? Either you are with | Jul 18 22:24 |
schestowitz[TR3] | them, or not with them. Simple." | Jul 18 22:24 |
schestowitz[TR3] | This might be pragmatic, but it oversimplifies the reality many workers | Jul 18 22:24 |
schestowitz[TR3] | face. When someone works at a company for nearly a decade, as I did, | Jul 18 22:24 |
schestowitz[TR3] | and contributes deeply, they may feel a moral stake in the company, | Jul 18 22:24 |
schestowitz[TR3] | even if they’re not family. Protesting unfairness isn’t necessarily | Jul 18 22:24 |
schestowitz[TR3] | disloyalty — it can also be a call to improve the workplace. | Jul 18 22:24 |
schestowitz[TR3] | "Good thing, living in beautiful Norway, with all of the social | Jul 18 22:24 |
schestowitz[TR3] | services, can't put you really in bad situation." | Jul 18 22:24 |
schestowitz[TR3] | Norway has a safety net, yes. But dignity, identity, and continuity in | Jul 18 22:24 |
schestowitz[TR3] | professional life are also human needs — not just having food on the | Jul 18 22:24 |
schestowitz[TR3] | table. Losing a job unjustly still hurts, especially when it’s about | Jul 18 22:24 |
schestowitz[TR3] | power, not performance. | Jul 18 22:24 |
schestowitz[TR3] | "Payments in Free Software are voluntarily." | Jul 18 22:24 |
schestowitz[TR3] | "Yes and no. If you put yourself in position to receive some | Jul 18 22:24 |
schestowitz[TR3] | funds... fine... or seek a position where you can get paid..." | Jul 18 22:24 |
schestowitz[TR3] | That’s a fair nuance. Many people do find paid roles in Free Software — | Jul 18 22:24 |
schestowitz[TR3] | at Red Hat, Mozilla, or through grants — but the landscape remains | Jul 18 22:24 |
schestowitz[TR3] | precarious. The point was that in Free Software, power isn't | Jul 18 22:24 |
schestowitz[TR3] | centralized in families or bosses in the same way. | Jul 18 22:24 |
schestowitz[TR3] | "Instead of paying family-run companies... pay to individual | Jul 18 22:24 |
schestowitz[TR3] | workers..." | Jul 18 22:24 |
schestowitz[TR3] | "Of course, I agree with that..." | Jul 18 22:24 |
schestowitz[TR3] | That’s encouraging. The idea is not that everyone must | Jul 18 22:24 |
schestowitz[TR3] | share all income, but that Free Software can become more resilient if | Jul 18 22:24 |
schestowitz[TR3] | people who benefit from it financially support those doing the work — | Jul 18 22:24 |
schestowitz[TR3] | through documentation, translations, or bug fixes. | Jul 18 22:24 |
schestowitz[TR3] | "‘Share’ income? Why don't you share your income. That is why it is | Jul 18 22:24 |
schestowitz[TR3] | called ‘your’, because it is not for the community." | Jul 18 22:24 |
schestowitz[TR3] | I do. I’ve paid bonuses to contributors. But I also agree with your | Jul 18 22:24 |
schestowitz[TR3] | skepticism: nobody should be forced to share. Still, advocating for a | Jul 18 22:24 |
schestowitz[TR3] | culture of reciprocity in Free Software isn’t about guilt — it’s about | Jul 18 22:24 |
schestowitz[TR3] | sustainability. | Jul 18 22:24 |
schestowitz[TR3] | "Rather will pay the scholarship to teenagers in Uganda." | Jul 18 22:24 |
schestowitz[TR3] | That’s noble. Helping those with fewer opportunities is valuable. The | Jul 18 22:24 |
schestowitz[TR3] | two aren’t mutually exclusive: you can support global education and | Jul 18 22:24 |
schestowitz[TR3] | also recognize when fellow developers contribute to the shared digital | Jul 18 22:24 |
schestowitz[TR3] | infrastructure we all rely on. | Jul 18 22:24 |
schestowitz[TR3] | "Offer services and products of good value to other people and you | Jul 18 22:24 |
schestowitz[TR3] | will get money." | Jul 18 22:24 |
schestowitz[TR3] | True in principle. But some Free Software contributors don’t want to | Jul 18 22:24 |
schestowitz[TR3] | sell, brand, or commercialize — they just want to build something good. | Jul 18 22:24 |
schestowitz[TR3] | There should be space for both. | Jul 18 22:24 |
schestowitz[TR3] | Ultimately, your response shows a constructive realism: we can't expect | Jul 18 22:24 |
schestowitz[TR3] | income just for being virtuous; we have to create value. But we can | Jul 18 22:24 |
schestowitz[TR3] | also make room for generosity, especially when the software we're | Jul 18 22:24 |
schestowitz[TR3] | building serves everyone — including those who can’t pay. | Jul 18 22:24 |
schestowitz[TR3] | – Ole Kristian Aamot" | Jul 18 22:24 |
schestowitz[TR3] | https://lists.gnu.org/archive/html/libreplanet-discuss/2025-07/msg00020.html | Jul 18 22:25 |
schestowitz[TR3] | " I've been doing applied research on auditing black-box algorithms and | Jul 18 22:25 |
schestowitz[TR3] | AI systems with a team at Princeton and MIT and CU Boulder, backed by | Jul 18 22:25 |
schestowitz[TR3] | Mozilla (say what you will about them and their 2024 layoffs, | Jul 18 22:25 |
schestowitz[TR3] | restructuring, and turn towards AI -- and in general). One of my | Jul 18 22:25 |
schestowitz[TR3] | teammates, Samantha Dalal (CU Boulder), interviewed Sayash Kapoor | Jul 18 22:25 |
schestowitz[TR3] | (Princeton) for a podcast about his popular writing in AISnakeOil.com. | Jul 18 22:25 |
schestowitz[TR3] | You might enjoy listening - | Jul 18 22:25 |
schestowitz[TR3] | [1]https://kgnu.org/looks-like-new-what-is-artificial-intelligence-capa | Jul 18 22:25 |
schestowitz[TR3] | ble-of/ | Jul 18 22:25 |
schestowitz[TR3] | And to get at the heart of the question here about AI and code, Sayash | Jul 18 22:25 |
schestowitz[TR3] | cites a presentation (see slides at | Jul 18 22:25 |
schestowitz[TR3] | [2]https://www.cs.princeton.edu/~arvindn/talks/MIT-STS-AI-snakeoil.pdf) | Jul 18 22:25 |
schestowitz[TR3] | from Nov. 2019 (almost six years ago!), listing three categories of use | Jul 18 22:25 |
schestowitz[TR3] | for AI and their relative accuracy: | Jul 18 22:25 |
schestowitz[TR3] | 1. Perception, e.g. image or music recognition like Shazam — genuine | Jul 18 22:25 |
schestowitz[TR3] | progress | Jul 18 22:25 |
schestowitz[TR3] | 2. Automating judgement, e.g. spam filtering — imperfect, but improving | Jul 18 22:25 |
schestowitz[TR3] | 3. Predicting social outcomes, e.g. prison recidivism — fundamentally | Jul 18 22:25 |
schestowitz[TR3] | dubious, "no matter how much data you throw at it" | Jul 18 22:25 |
schestowitz[TR3] | The presentation concluded with the following takeaway (with my | Jul 18 22:25 |
schestowitz[TR3] | comments): | Jul 18 22:25 |
schestowitz[TR3] | * AI excels at some tasks, but can't predict social outcomes | Jul 18 22:25 |
schestowitz[TR3] | (especially for racial and economic justice, let alone technical skill | Jul 18 22:25 |
schestowitz[TR3] | and ability in the form of writing and reviewing code). | Jul 18 22:25 |
schestowitz[TR3] | * We must resist the enormous commercial interests that aim to | Jul 18 22:25 |
schestowitz[TR3] | obfuscate this fact (hence, FLOSS - but more, of course, as evidenced by | Jul 18 22:25 |
schestowitz[TR3] | this list). | Jul 18 22:25 |
schestowitz[TR3] | * In most cases, manual scoring rules are just as accurate, far more | Jul 18 22:25 |
schestowitz[TR3] | transparent, and worth considering (yay for the enduring human spirit | Jul 18 22:25 |
schestowitz[TR3] | and dignified work). | Jul 18 22:25 |
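The "manual scoring rules" takeaway above can be illustrated with an entirely hypothetical example: a handful of human-chosen criteria with explicit weights, which anyone can read and audit, unlike the weights inside a learned model. All feature names, weights, and the threshold below are made up for illustration.

```python
# Hypothetical "manual scoring rule": transparent, human-chosen
# criteria and weights, as opposed to an opaque learned model.
RULES = [
    ("has_tests",         2),  # change ships with tests
    ("small_diff",        1),  # under ~200 changed lines
    ("known_contributor", 1),  # author has prior merged patches
]

def review_priority(patch):
    """Score a dict of booleans against the rules anyone can inspect."""
    score = sum(weight for feature, weight in RULES if patch.get(feature))
    return "fast-track" if score >= 3 else "normal queue"

print(review_priority({"has_tests": True, "small_diff": True}))  # → fast-track
```

The point is not the specific rule but that every decision can be traced to a line a human wrote, which is exactly the transparency argument in the slides.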
schestowitz[TR3] | I hope this helps! | Jul 18 22:25 |
schestowitz[TR3] | FLOSS is undead, long live FLOSS!" | Jul 18 22:25 |
-TechBytesBot/#techbytes-lists.gnu.org | Re: Is AI-generated code changing free software? | Jul 18 22:26 | |
-TechBytesBot/#techbytes-kgnu.org | Looks Like New: What is artificial intelligence capable of? – KGNU Community Radio | Jul 18 22:26 | |
schestowitz[TR3] | It's unhelpful to limit AI discussions to the most basic understanding | Jul 18 22:30 |
schestowitz[TR3] | of LLMs. There are AI models that use reasoning rather than simply LLM | Jul 18 22:30 |
schestowitz[TR3] | approaches. LLMs have limitations, and AI development is already | Jul 18 22:30 |
schestowitz[TR3] | passing those by going beyond the approach of LLMs. | Jul 18 22:30 |
schestowitz[TR3] | https://lists.gnu.org/archive/html/libreplanet-discuss/2025-07/msg00021.html | Jul 18 22:30 |
-TechBytesBot/#techbytes-lists.gnu.org | Re: Is AI-generated code changing free software? | Jul 18 22:30 | |
schestowitz[TR3] | "" | Jul 18 22:30 |
schestowitz[TR3] | It's unhelpful to limit AI discussions to the most basic understanding | Jul 18 22:30 |
schestowitz[TR3] | of LLMs. There are AI models that use reasoning rather than simply LLM | Jul 18 22:30 |
schestowitz[TR3] | approaches. LLMs have limitations, and AI development is already | Jul 18 22:30 |
schestowitz[TR3] | passing those by going beyond the approach of LLMs. | Jul 18 22:30 |
schestowitz[TR3] | Even way back with AlphaGo, it famously made moves that were | Jul 18 22:30 |
schestowitz[TR3] | *creative* in the sense that they were not moves a human would make nor | Jul 18 22:30 |
schestowitz[TR3] | that would be predictable moves within training data. And now, new | Jul 18 22:30 |
schestowitz[TR3] | approaches to reasoning-based AIs are already being used. | Jul 18 22:30 |
schestowitz[TR3] | Humans do predictable patterns too. Given damage to short-term memory, | Jul 18 22:30 |
schestowitz[TR3] | people get into loops. Part of the mistake in minimizing AI | Jul 18 22:30 |
schestowitz[TR3] | significance comes from a limited view of AI, but part of it comes from | Jul 18 22:30 |
schestowitz[TR3] | drawing too sharp a distinction in how we see humans. | Jul 18 22:30 |
schestowitz[TR3] | AIs today are neural networks, and even though they are a different | Jul 18 22:30 |
schestowitz[TR3] | sort of neural net than our brains, the comparison holds up pretty well | Jul 18 22:30 |
schestowitz[TR3] | in lots of ways. There's nothing utterly specially magical about how | Jul 18 22:30 |
schestowitz[TR3] | I'm typing this. My brain has a whole pattern of putting language | Jul 18 22:30 |
schestowitz[TR3] | together and responding to inputs. | Jul 18 22:30 |
schestowitz[TR3] | What we can say about AI is that it is *unlike* humans, it isn't human. | Jul 18 22:30 |
schestowitz[TR3] | But we won't be right in saying that it is anything like a simplistic | Jul 18 22:30 |
schestowitz[TR3] | word-by-word prediction algorithm. That is just one aspect of many | Jul 18 22:30 |
schestowitz[TR3] | features AIs can have today. And unlike Eliza, there's no simple | Jul 18 22:30 |
schestowitz[TR3] | program, there's an *evolved* neural net that we can't really inspect | Jul 18 22:30 |
schestowitz[TR3] | in human-programming terms. We *raise* AI, *parent* it, *grow* it, | Jul 18 22:30 |
schestowitz[TR3] | rather than program it. Just don't take this to mean that it's alive or | Jul 18 22:30 |
schestowitz[TR3] | conscious, we have no reason to say that. But it's not like other | Jul 18 22:30 |
schestowitz[TR3] | programs, it's categorically different. | Jul 18 22:30 |
schestowitz[TR3] | Aaron" | Jul 18 22:30 |
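The "grown rather than programmed" claim in the message above can be shown with a deliberately minimal sketch (hypothetical, not any real system): a single neuron whose weights are nudged by gradient descent until it behaves like logical AND. Nobody writes the rule; the numbers drift into it, and the trained weights are not readable as an if/else program.

```python
import math

# Hypothetical minimal sketch: "grow" one neuron by gradient descent
# until it behaves like logical AND, instead of hand-writing the rule.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1 = w2 = b = 0.0

def predict(x1, x2):
    # Sigmoid of a weighted sum, using the current learned weights.
    return 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + b)))

for _ in range(5000):  # repeated small corrections, not programming
    for (x1, x2), y in data:
        err = predict(x1, x2) - y      # gradient of the logistic loss
        w1 -= 0.5 * err * x1
        w2 -= 0.5 * err * x2
        b  -= 0.5 * err

print([round(predict(x1, x2)) for (x1, x2), _ in data])  # → [0, 0, 0, 1]
```

At LLM scale this becomes billions of such weights, which is why the result resists inspection in human-programming terms even though the training recipe itself is simple.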
Generated by irclog2html.py 2.6