• 11 Posts
  • 46 Comments
Joined 1 year ago
Cake day: July 2nd, 2023

  • With this GPU you can install a media server like Plex or Jellyfin and offload the transcoding to the GPU (see the sketch below), but mind you, you will still have a high idle power consumption.

    Normally, for a headless home server I would want virtualisation support and low idle power consumption, so this GPU and PSU are a bit of an overkill if you are not planning to fully utilise them.
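
    To make the "offload to the GPU" part concrete, here is a minimal sketch of the kind of ffmpeg command a media server ends up running for hardware transcoding, wrapped in Python. It assumes an NVIDIA card and an ffmpeg build with NVDEC/NVENC support; the file paths are placeholders, and this is only an illustration of the idea, not how Plex or Jellyfin are configured.

    ```python
    # Hypothetical sketch of GPU-offloaded transcoding.
    # Assumes an NVIDIA GPU and an ffmpeg build with NVDEC/NVENC enabled;
    # the paths below are placeholders.
    import subprocess

    SRC = "/media/movies/example.mkv"    # placeholder input file
    DST = "/tmp/example-transcoded.mkv"  # placeholder output file

    cmd = [
        "ffmpeg",
        "-hwaccel", "cuda",       # decode on the GPU (NVDEC)
        "-i", SRC,
        "-c:v", "h264_nvenc",     # encode on the GPU (NVENC)
        "-preset", "p5",          # quality/speed trade-off
        "-c:a", "copy",           # pass the audio through untouched
        DST,
    ]
    subprocess.run(cmd, check=True)
    ```

    With the video work pushed to NVDEC/NVENC like this, the CPU stays mostly free, which is the whole point of putting a GPU in a media server.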


  • And you, as an analytics engineer, should know that already? I use LLMs almost daily (Gemini, OpenAI, Mistral, etc.), and I know for sure that if you ask a question about a niche topic, the chances of the LLM hallucinating are much higher. To reduce hallucinations, you can also use different prompt engineering techniques and ask a better question.

    Another very good question to ask an LLM is: what is heavier, one kilogram of iron or one kilogram of feathers? A lot of LLMs really struggle with this question and start hallucinating, inventing their own weird logical process and generating completely credible-sounding but factually wrong answers (a sketch of asking it with and without prompt engineering follows this comment).

    I still think that LLMs aren’t a silver bullet for everything, but they really excel at certain tasks. We are also still in the honeymoon period of AI, similar to self-driving cars; I think at some point most people will realise that even this new technology has its limitations, and hopefully they will learn to use it more responsibly.
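
    As a minimal sketch of what I mean by prompt engineering, here is the kilogram question asked two ways with the OpenAI Python SDK. The model name and prompts are just placeholders I picked for illustration: the first call asks the question cold, the second adds a system prompt that forces the model to restate the units and reason before answering.

    ```python
    # Hypothetical sketch: compare a "cold" question with a prompt-engineered one.
    # Assumes the official OpenAI Python SDK and an OPENAI_API_KEY in the environment;
    # the model name below is just a placeholder.
    from openai import OpenAI

    client = OpenAI()
    QUESTION = "What is heavier, one kilogram of iron or one kilogram of feathers?"

    def ask(messages):
        resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
        return resp.choices[0].message.content

    # 1) Naive prompt: just the question, nothing else.
    naive = ask([{"role": "user", "content": QUESTION}])

    # 2) Prompt-engineered: a system prompt that makes the model check the units
    #    and reason step by step before committing to an answer.
    engineered = ask([
        {"role": "system", "content": (
            "You are a careful assistant. Before answering, restate the quantities "
            "and their units, reason step by step, and only then give a one-sentence answer."
        )},
        {"role": "user", "content": QUESTION},
    ])

    print("Naive answer:\n", naive)
    print("\nPrompt-engineered answer:\n", engineered)
    ```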





  • For 70€ you can buy an N4000 mini PC with 6 GB of RAM and 128 GB of eMMC storage. This is just an example from AliExpress; I am not going to post a link here, but I believe you can find even more offers. Additionally, you can buy second-hand thin clients for dirt cheap.

    With the right power supply and some BIOS optimisation you can bring their power consumption down to a level comparable to a Pi's, and they would be a lot more versatile and powerful as a home/media/automation setup.