00:48:03 Ron BC: New User Linux is actually pretty good. (I say as a not-new Linux user.) All the St. Louis groups are pretty darned good: kudos everyone.
00:48:27 Sean T.: Reacted to "New User Linux is ac..." with ❤️
00:49:21 JM: New User Linux is pretty cool. Everyone, including Stan, was very helpful and compassionate about my questions.
00:52:19 Ron BC: Is that calendar an embedded Google calendar?
00:53:08 Sean T.: I think it is a Google calendar that is embedded in the site.
00:56:39 JM: Sony has a full audio platform that works on Android to provide a Walkman-sized device that handles the audio hardware.
00:57:28 Ron BC: As I understand it, Android audio has a lot of lag in it. I wonder how well the Sony product works and how it deals with that?
00:57:56 Phil B: FIRST = For Inspiration and Recognition of Science and Technology
01:05:12 Ron BC: KDE's indexing service (Baloo) often gets turned off - it's considered one of the worst features. When it is running and has built an index, searching is pretty quick.
01:07:02 Randy van heusden: https://www.voidtools.com/
01:07:06 Ron BC: systemd-timers was a great Sean Presentation™
01:07:14 Sean T.: Reacted to "systemd-timers was a..." with ❤️
01:08:12 Phil B: https://alternativeto.net/software/everything/
01:08:25 JM: Reacted to "KDE's indexing ser..." with 👌
01:08:55 JM: 👏
01:09:28 JM: Replying to "systemd-timers was..." Is it possible to link it here?
01:11:45 JM: FediLab on the Mastodon network is cool; I'd add you guys if I saw you there.
01:15:05 JM: Suggestion based on what Mr. Reichardt said: If you post the list of descriptions to the L.U.G. page, at least then someone could go look up the topic using the date in the TXT file.
01:17:30 Randy van heusden: Replying to "systemd-timers was a..." https://www.voidtools.com/
01:20:30 JM: Replying to "systemd-timers was..." Thank you. I meant the systemd-timers presentation log(s).
01:20:56 pranab: Lattitude (trademark pending)
01:22:17 JM: I am interested in using these/any tools offline and without having to sign license agreements or agreements that require contact information (litigable). Can Jupyter and/or Ollama fulfill those requirements of mine?
01:22:34 Sean T.: Yes
01:22:37 Sean T.: Totally
01:22:58 JM: Reacted to "Totally" with 👍
01:26:55 JM: 👏
01:27:26 Billy Mandy: Hello
01:27:44 JM: Reacted to "Hello" with 👋
01:30:19 tony.c: and now for the weather ...
01:38:10 JM: That's an emergent property of giant amounts of training data (the large data sets).
01:39:06 Randy van heusden: the weather: https://www.wttr.in/63101
01:39:22 tony.c: NO
01:40:23 JM: The "personality" can be augmented by the formatted prompt, a.k.a. the examples given. These sometimes come in the form of back-and-forth example conversations between server and user.
01:40:28 tony.c: https://www.sluug.org/pipermail/announce/2021-February/000760.html
01:40:49 Sean T.: Reacted to "https://www.sluug.or..." with 👍
01:41:13 JM: Reacted to "the weather: http..." with 🤩
01:41:34 JM: Replying to "the weather: http..." Beautiful!
01:43:12 Billy Mandy: exactly, it's a bit long
01:43:22 Billy Mandy: users don't want to read
01:43:33 JM: "Keep the titles short, trying to make each word represent the meat of each technology."
01:44:43 Ron Bc: @JM - I don't know where there'd be a link to the systemd-timers presentation. Maybe Stan knows? sluug.org?
01:44:47 Randy van heusden: Stan's point is very well taken and it is true that most people do not read paragraphs.
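[A minimal sketch of the prompt formatting JM describes at 01:40:23, assuming the Ollama Python client (pip install ollama) and a locally pulled model named llama3; the model name, system message, and example exchange are illustrative assumptions, not material from the presentation.]

    import ollama  # talks to the local Ollama daemon; no cloud account or license sign-up required

    # A system message plus a short back-and-forth example conversation steers
    # the model's "personality" and the format of its replies (few-shot prompting).
    messages = [
        {"role": "system", "content": "You are a terse Linux user group assistant. Answer in one sentence."},
        {"role": "user", "content": "What does cron do?"},
        {"role": "assistant", "content": "cron runs scheduled jobs at fixed times."},
        {"role": "user", "content": "What do systemd timers do?"},
    ]

    reply = ollama.chat(model="llama3", messages=messages)  # assumed model name
    print(reply["message"]["content"])

[Because this runs entirely against the local Ollama daemon, it also fits the offline, no-sign-up use case raised at 01:22:17.]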
01:49:24 JM: 👏
01:49:34 pranab: 👏
01:52:19 JM: You can give it an example of the format you want to see for the output you want.
01:53:29 JM: ComfyUI can take the prompts you generate with one LLM, parse them, and feed them into another model such as Stable Diffusion, while at the same time sending different calls to the text-generation LLM, one for type 1 and one for type 2 requests.
01:53:56 JM: You just save the net of boxes (the node graph) to a file, and it reloads that the next time you open it; just hit play and type prompts, pretty much.
01:55:25 pranab: @Sean T. how does one determine the minimum system requirements to run LLMs locally? I went to the Ollama website and there was no information.
01:57:35 pranab: Thank you.
01:58:59 pranab: Thank you.
02:00:02 JM: w007 @ Ollama being easy; I look forward to trying it!
02:00:15 pranab: Yeah, that clarifies a lot. I thought you had a brand new desktop.
02:00:20 pranab: :-)
02:00:22 JM: Do you think it will run on Ubuntu 18.04 LTS?
02:03:06 Tyler R: me with a p20 and p0 and multiple Xavier modules....
02:04:57 JM: Replying to "@Sean T. how does ..." The model itself determines most of the hardware requirements. Typical models are in the gigabytes, and many models are too big for a commonplace single-GPU system. A now-"medium" sized LLM of 8GB would need something like a 12GB VRAM NVIDIA card (or sometimes it can use the BLAAAST library on other systems, dunno details there), and should probably have about 16GB of regular RAM. Sometimes the age of the Linux distribution actually matters, because the Python libraries for Torch are built against the systems they were compiled and distributed for. Maybe someday (not likely) there will be backfill from independent heroes, but most people are abandoning older Linux platforms (or so it seems to me).
02:06:14 JM: Replying to "@Sean T. how does ..." However, a 'medium' size model like I've described will work 'better'/reasonably with GPU acceleration, even on a single GPU.
02:07:26 JM: @ self-aware, we tell it to behave itself or no more data.
02:07:47 JM: (and make sure there's no agent-style access to the Internet, and air-gap it.)
02:08:15 pranab: Replying to "@Sean T. how does on..." Thank you
02:09:16 Billy Mandy: would narrowing the question down help the situation?
02:16:01 Matt Matti: Gotta run. Very interesting and great presentation.
02:25:08 pranab: Even LLMs don't like dates. lol
02:26:59 pranab: This is amazing
02:27:04 Randy van heusden: What if you took the date out and left in the month and year?
02:29:00 Sean T.: https://learn.microsoft.com/en-us/azure/developer/python/get-started-app-chat-template?tabs=github-codespaces
02:29:36 Robert's iPhone: Sean, nice talk! Is your material available online, e.g. GitHub?
02:36:16 Sean T.: https://github.com/seantwie03/langchain-sluug-demo
02:40:06 JM: Did you do data conditioning on the unsuccessful data dump you gave it? For instance: prompt="Considering the following as input: { data dump from website } question: this and that?"
02:40:09 JM: Or did I miss that part?
02:41:33 JM: So the RAG method allows the LLM to get the retrieved files instead of feeding them all into the tokens?
02:41:43 pranab: Thank you!
02:41:52 Donald Coupe: Thank you 🙏
02:41:56 Randy van heusden: THANK YOU for putting it all together
02:42:02 JM: Thank you. Heavy presentation. Tons to learn.
02:42:07 Billy Mandy: 👏
02:42:56 JM: I've never used Jupyter; does it often give errors where you have to reload the whole program? I mean, you can run commands from the thing, that's pretty darn cool.
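[A rough sketch of the retrieval idea behind the 02:41:33 question: instead of stuffing every document into the prompt, embed the documents, pick the most similar one, and feed only that to the model. This assumes the Ollama Python client with an embedding model such as nomic-embed-text and a chat model such as llama3; it is not the code from the langchain-sluug-demo repository, and the placeholder documents and question are invented for illustration.]

    import math
    import ollama

    # Placeholder documents standing in for the files you would normally index.
    docs = [
        "Placeholder notes about a presentation on systemd timers.",
        "Placeholder notes about running large language models locally with Ollama.",
    ]

    def embed(text):
        # nomic-embed-text is an assumed embedding model; any local embedding model works.
        return ollama.embeddings(model="nomic-embed-text", prompt=text)["embedding"]

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(y * y for y in b))
        return dot / (norm_a * norm_b)

    question = "Which notes mention Ollama?"
    q_vec = embed(question)

    # Retrieval step: keep only the single most relevant document.
    best_doc = max(docs, key=lambda d: cosine(q_vec, embed(d)))

    # Augmentation + generation: the prompt contains just the retrieved text, not everything.
    prompt = f"Considering the following as input:\n{best_doc}\n\nQuestion: {question}"
    answer = ollama.chat(model="llama3", messages=[{"role": "user", "content": prompt}])
    print(answer["message"]["content"])

[The presentation's own material (the langchain-sluug-demo link above) uses LangChain, which provides abstractions for this same retrieve-then-prompt flow.]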
02:43:47 Jamal Ahmed: I'm from Meetup lol
02:44:02 Jamal Ahmed: ouch lol
02:44:11 Randy van heusden: I used Meetup too
02:45:10 Randy van heusden: maybe Proxmox
02:46:16 Sean T.: steercom@sluug.org