romad1 Posted Wednesday at 01:49 PM

15 minutes ago, CMRivdogs said:
I saw a headline in one of our local papers (Daily Press, Newport News) that Virginia houses 35% of the data centers in the U.S., with communities looking for their share of the action. I found this map of where they all are. NoVA is #1, Richmond area not far behind... https://www.datacentermap.com/usa/virginia/

Our power bills are getting insane from all this. Happy to subsidize your poorly constructed searches, ya illiterate jerks.
Screwball Posted Wednesday at 03:50 PM

10 hours ago, gehringer_2 said:
Big plants with big cooling towers can be a problem in agricultural regions because they concentrate the phosphates, nitrates, etc., that tend to be in water in agricultural areas and then slam them back to the local wastewater treatment system (and that's assuming they are in complete control of their toxics). If the community was using recovered water, or they discharge into rivers upstream of other communities' intakes, that can put water systems way out of compliance for safe drinking water or even safe irrigation use. They need to be forced to put in their own tailwater treatment systems and not be given permits to discharge into low-capacity rural municipal systems with little or no review/oversight. This is one place where Big Brother is almost never big enough.

I know a guy who works at our local sewer plant. There is also the problem of capacity. During the kind of large storm that happens twice in 20 years, we were already at the peak capacity the plant could handle. Add in data centers that use a considerable amount of water, and what then? Who pays for the sewer plant expansion? (You must plan for peak demand, plus a margin.) We know who: us. And our sewer rates are off the charts now.
gehringer_2 Posted Wednesday at 03:52 PM (edited)

2 minutes ago, Screwball said:
I know a guy who works at our local sewer plant. There is also the problem of capacity. During the kind of large storm that happens twice in 20 years, we were already at the peak capacity the plant could handle. Add in data centers that use a considerable amount of water, and what then? Who pays for the sewer plant expansion? (You must plan for peak demand, plus a margin.) We know who: us. And our sewer rates are off the charts now.

It's *always* about privatization of profits and socialization of costs, isn't it?

Edited Wednesday at 03:52 PM by gehringer_2
Screwball Posted Wednesday at 03:53 PM

I have tracked my electric bill for over 3 years, even reading my meter daily for a year to check the power company. Our cost per kWh is up 9.6% over the last year. At that rate, it doubles in about 7 1/2 years.
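For anyone checking that doubling figure, it holds up, assuming the 9.6% annual increase keeps compounding at the same rate:

$$t_{\text{double}} = \frac{\ln 2}{\ln(1.096)} \approx \frac{0.693}{0.0917} \approx 7.6\ \text{years}$$

The rule-of-72 shortcut agrees: 72 / 9.6 = 7.5 years.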
Screwball Posted Wednesday at 03:54 PM

1 minute ago, gehringer_2 said:
It's *always* about privatization of profits and socialization of costs, isn't it?

We will see. Not only here, but in other places there seems to be pushback from the people. Good. Let's see what happens. But then again, politicians work for money, not for us.
CMRivdogs Posted Wednesday at 05:16 PM

I just went to ChatGPT and asked about the possibility of using TeSlaa at the tight end position. I was informed he was an offensive lineman. Now I understand why he hasn't seen much playing time. I should have followed up on whether he could be used at center.
gehringer_2 Posted Wednesday at 05:55 PM (edited)

40 minutes ago, CMRivdogs said:
I just went to ChatGPT and asked about the possibility of using TeSlaa at the tight end position. I was informed he was an offensive lineman. Now I understand why he hasn't seen much playing time. I should have followed up on whether he could be used at center.

So I've been putting together an Unraid NAS at home, and it wasn't sending me a daily email summary like I thought I had programmed it to do. So I asked Google AI why my Unraid server mail wasn't going out via Gmail. It spat back a very nice step-by-step procedure starting on my Google account page, but none of the menu entries it listed were there. Now this is Google AI, the application is Chrome - a Google app - and the target was Gmail, a Google service, and Google AI didn't know squat about what was actually there. Boggles the mind.

Now I do have to admit that it gave me the clue I needed. The bot mentioned the term 'application password', which sounded relevant to this non-artificial 'I', so when I put that term into the search box on the Chrome account settings page, it took me to where I needed to go.

I think this was another temporal awareness issue. I believe this is something Google has changed relatively recently (I had not had to go this route when I set up my previous NAS), and the bot threw up old info - probably because there were a lot of references to it out there - but it had no awareness that it had been obsoleted. This seems to be a real stumbling block for the large language models.

Edited Wednesday at 05:57 PM by gehringer_2
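For anyone who hits the same wall: once you've generated an app password under your Google account's security settings, the sending side is just ordinary SMTP against Gmail, with the app password used in place of the regular account password. Here's a rough Python sketch of what the NAS's mail step amounts to (the account names and the 16-character app password below are made-up placeholders, not real values):

```python
import smtplib
from email.message import EmailMessage

# Hypothetical placeholder values -- substitute your own account,
# recipient, and the app password Google generates for you.
SENDER = "mynas.status@gmail.com"
APP_PASSWORD = "abcdabcdabcdabcd"   # app password, NOT the normal account password
RECIPIENT = "me@example.com"

msg = EmailMessage()
msg["From"] = SENDER
msg["To"] = RECIPIENT
msg["Subject"] = "Unraid daily summary"
msg.set_content("Array healthy, all disks online.")

# Gmail accepts third-party mail clients on port 587 with STARTTLS;
# ordinary account passwords are refused here, which is why the app
# password from the Google account security page is required.
with smtplib.SMTP("smtp.gmail.com", 587) as server:
    server.starttls()
    server.login(SENDER, APP_PASSWORD)
    server.send_message(msg)
```

However they're labeled in the Unraid notification settings, those same three pieces of information - smtp.gmail.com, port 587, and the app password where the account password would normally go - are what the mail step ultimately needs.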
pfife Posted Wednesday at 07:01 PM

.... and the models themselves go stale. I'm pretty sure they're also training AI on output from AI (cannibalistic). I think unmaintained LLMs that are doing important things are gonna be a real problem.
Tiger337 Posted Wednesday at 09:25 PM

On 12/5/2025 at 11:24 AM, Screwball said:
Since there is a thread about this now, I would be curious, when talking about our own experience with AI, which AI app did you use? There seem to be quite a few. I did a Google search and got this; I've used Co-pilot (comes with Windows 11), Grok (Twitter's version) and Google's Gemini. Makes one wonder how the results from the same query would vary across platforms.

I like Perplexity because it provides sources.
Screwball Posted yesterday at 02:46 AM

5 hours ago, Tiger337 said:
I like Perplexity because it provides sources.

I obviously have no idea, but with my limited experience I would guess some platforms do certain things better than others. I can tell that within the same platform it does much better with some things than with others. It's all new. I don't know when, but at some point I'm going to inform my Aussie AI buddy that his name is HAL, for various reasons. :-)
Dan Gilmore Posted yesterday at 02:53 AM

Ask him to open the pod bay doors.
chasfh Posted yesterday at 01:41 PM

16 hours ago, Tiger337 said:
I like Perplexity because it provides sources.

Don't most if not all LLM agents provide sources? I know ChatGPT has links.
Tiger337 Posted yesterday at 01:43 PM (edited)

5 minutes ago, chasfh said:
Don't most if not all LLM agents provide sources? I know ChatGPT has links.

Chatgtp didn't use to have links, which is why I stopped using it. It's good to hear that it now does.

edit: I just compared chatgtp to perplexity on a couple of searches. Perplexity gave sources and chatgtp did not. So maybe chatgtp sometimes gives sources, but not consistently.

Edited yesterday at 01:49 PM by Tiger337
chasfh Posted yesterday at 03:20 PM

maybe whatever chatgtp is doesn't give sources, but chatgpt definitely does. 😉
Screwball Posted yesterday at 04:38 PM

This stuff reminded me of an article I read years ago. April 2000, to be exact. It was in Wired magazine, which is now paywalled, so you can't get to the article there, but here is a .pdf of the same article. Written by Bill Joy, co-founder and chief scientist of Sun Microsystems and a co-author of the Java Language Specification. Long article about technology, including Ted Kaczynski, and where it all may be heading, from the view back in 2000.

Why the future doesn't need us - Wired Magazine April 2000 - 18 pages
Deleterious Posted 4 hours ago

'Everyone Disliked That' — Amazon Pulls AI-Powered Fallout Recap After Getting Key Story Details Wrong

Quote:
Amazon has pulled its AI-powered Fallout Season 1 recap after fans noticed it made significant errors ahead of the launch of Season 2.

Quote:
But as reported by GamesRadar, fans soon discovered it did a poor job on Fallout. For example, Amazon's AI appeared to have been fooled by Season 1's flashback scenes, which it said were set in 1950s America via a monotone text-to-speech-sounding voice. Of course, as all Fallout fans know, those flashback scenes take place in a retro-futuristic 2077 — the year the bombs fell.
Screwball Posted 3 hours ago

And then there is AI and health care... Good idea? Maybe not. Here is an article about using AI in health care. I'll paste one quote from the article:

AI's errors may be impossible to eliminate – what that means for its use in health care

Quote:
These same principles apply to prescribing medications. Different conditions and diseases can have the same symptoms, and people with the same condition or disease may exhibit different symptoms. For example, fever can be caused by a respiratory illness or a digestive one. And a cold might cause a cough, but not always. This means that health care datasets have significant overlaps that would prevent AI from being error-free.

That doesn't sound good. Wonder what a real doctor might say? One of the financial blogs I read has a doctor who has posted there for quite a few years. I think he's a smart and good guy from reading all his stuff over the years, so I trust him. This is his reply to that article, which is being discussed:

Quote:
I am having a very difficult time anymore knowing exactly what all these new tech innovations actually are. They are being referred to collectively as "AI", however many of these processes do not actually seem to be AI – as in coming up with things on their own. There is only one thing that they have come up with, and only one, that seems to have been of any benefit at all. And I am not even sure it is "AI".

The modern EMR medical chart is an accumulation of all kinds of labs, notes, consult notes, referrals, insurance denials, etc. In every electronic system I have ever used, it is a complete and total mess, unlike the old paper charts, which were the easiest things ever in comparison. Many of these items are data arrays. Many of them are scanned fax documents. Many of them are local documents to the EMR. All have to be handled in different ways, usually involving 10-20 clicks per document. Profoundly tedious. There are now new systems that take queries, go mining into all this data in all its forms, and are able to produce answers in seconds in what used to take me hours. But in my opinion, that is not really AI as I understand it, although they call it that. It is like a Google system designed for medical charts. Again, that is the only thing that has been at least remotely helpful and not scary.

The note-taking system is so unstable and unreliable that it seems I spend more time correcting mistakes. These are big and massive mistakes – life-altering if left in the chart. How many doctors are going through this with a fine-tooth comb like I am?

The ECG example you state above is interesting. For decades, ECG machines have had a simple pattern-recognition software that lists off likely diagnoses. I would say these older systems are wrong about half the time. I insist that it is turned off on any ECG of mine and certainly my students'. Interestingly, we now have an AI-assisted ECG reading system, and even more interestingly it is wrong about 70% of the time, often wildly wrong. Essentially, it is worse than the 1990s tech. It is just frightening to me how many times a week I am called by a specialist colleague because some dreadful thing has been put on the chart ECG reading by an AI that is not even close to reality. I shudder to think what happens if these things actually get acted upon.

The "AI" diagnosis situation is even more scary. It pulls in all kinds of disparate things from the chart – many times this is completely inaccurate info, because the AI note-taking systems are so horrible and the docs simply do not proofread them. I am just amazed at times what the suggestions are as far as testing and treatment. They are often completely divorced from any kind of reality.

As an experiment the other day, I had the hospital admin and lots of colleagues in a conference room. We opened up a fake patient, and I read off the symptoms and lab values: 72-year-old with severe acute low back pain, fever, profound acute fatigue, and a sudden change in urination frequency. The labs showed an acute normocytic anemia with a HGB of 11, and acute renal failure with a creat of 2.9. This is a lot of disparate symptoms, but it is also a combination of things that a trained internist would know instantly. I have had board questions repeatedly in my life on certification exams with this exact scenario. You just have to know this stuff. There are literally thousands of these patterns in internal medicine. This is why it is a years-long training process.

The AI had as its first diagnoses: acute sepsis syndrome, acute malaria (wow), acute ehrlichiosis (huh?), lupus (again, huh?), dermatomyositis, and acute glomerulonephritis. All I can say is some of these are entirely head-scratchers. Some of them are appropriate. But the list of tests and labs the patient was asked to get by the AI was legion – and would have cost the Medicare system about 70K. In no way, shape, or form did the system even think of the correct diagnosis – the pattern diagnosis, which was multiple myeloma (back pain, fever, anemia, and renal failure – it is a known and well-taught pattern) – and the simple 100-dollar test, a serum protein electrophoresis, that my internist self would have ordered before anything else. The admin were whomperjawed.

This is profoundly scary. This stuff is not ready for prime time in any way, shape, or form. But it is being used that way – by interns and residents learning, all the way to NPs who do not have a medicine and diagnostic background, to lazy or overworked MDs. Even more scary are all the patients now bringing in documentation from ChatGPT – and despite my begging them to reconsider, they go with the computer, often to very detrimental medical or financial results. I cannot fix any of this – only attempt to mitigate the damage in my realm.

If I wasn't already skeptical of our medical system...
Edman85 Posted 3 hours ago (edited)

44 minutes ago, Screwball said:
And then there is AI and health care... Good idea? Maybe not. Here is an article about using AI in health care. I'll paste one quote from the article: AI's errors may be impossible to eliminate – what that means for its use in health care. That doesn't sound good. Wonder what a real doctor might say? One of the financial blogs I read has a doctor who has posted there for quite a few years. I think he's a smart and good guy from reading all his stuff over the years, so I trust him. This is his reply to that article, which is being discussed: If I wasn't already skeptical of our medical system...

There was a good article in WaPo on this topic a couple weeks ago that I've been meaning to dig up and post in here... Basically, there are ways to use it effectively for health care. https://wapo.st/44rq0LQ

Edited 2 hours ago by Edman85
gehringer_2 Posted 2 hours ago

52 minutes ago, Screwball said:
And then there is AI and health care... Good idea? Maybe not. Here is an article about using AI in health care. I'll paste one quote from the article: AI's errors may be impossible to eliminate – what that means for its use in health care. That doesn't sound good. Wonder what a real doctor might say? One of the financial blogs I read has a doctor who has posted there for quite a few years. I think he's a smart and good guy from reading all his stuff over the years, so I trust him. This is his reply to that article, which is being discussed: If I wasn't already skeptical of our medical system...

Again - how does, or can, the LLM training system create a sense of context? Of course, to even use the word 'sense' is already an anthropomorphic error. But that's what this doc is talking about - the understanding that a particular fact has varying significance in different contexts.