The article you posted is from 2023 and PERA was basically dropped. However, this article talks about PREVAIL, which would prevent patents from being challenged except by the people who were sued by the patent-holder, and it’s still relevant.
Thanks for clarifying! I’ve heard nothing but praise for Kagi from its users so that’s what I was assuming, but Searxng has also been great so I wouldn’t have been too surprised if you’d compared them and found its results to be on par or better.
By the way, if you’re self hosting Searxng, you can add your own index. Searxng supports YaCy, which is an actively developed, open source search index and crawler that can be operated standalone or as part of a decentralized (P2P) network. Here are the Searxng docs for that engine. I can’t speak to its quality as I still haven’t set it up, though.
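Since I haven’t set it up myself, treat this as a rough sketch of what enabling the YaCy engine in settings.yml looks like - the key names and port are assumptions on my part, so defer to the engine docs linked above:

```yaml
# settings.yml - sketch of enabling the yacy engine (check the engine docs for current options)
engines:
  - name: yacy
    engine: yacy
    shortcut: ya
    base_url: http://localhost:8090  # your self-hosted YaCy instance; 8090 is YaCy's default port
```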
there are better open source meta search engines
I already use Searxng and have never used Kagi, but I’m curious why you say that Searxng is “better.” Are you saying that because the quality of the searches is better, because it’s open source and Kagi isn’t, or for some other reason?
Yes, but have you seen some of the decisions the Supreme Court has come up with?
Do you only experience the 5-10 second buffering issue on mobile? If not, then you might be able to fix the issue by tuning your Nextcloud instance - upping the PHP memory limit, disabling debug mode and dropping the log level back to warn if you ever changed it, enabling memory caching, etc…
Check out https://docs.nextcloud.com/server/latest/admin_manual/installation/server_tuning.html and https://docs.nextcloud.com/server/latest/admin_manual/installation/php_configuration.html#ini-values for docs on the above.
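To make that concrete, here’s a rough sketch of the relevant config.php settings (values are illustrative, not tuned for your hardware):

```php
// inside the $CONFIG array in config/config.php
'debug' => false,                          // make sure debug mode is off
'loglevel' => 2,                           // 2 = warn; 0 (debug) is very chatty
'memcache.local' => '\OC\Memcache\APCu',   // local memory cache; needs the APCu PHP extension
```

plus something like `memory_limit = 512M` in your php.ini - the second link covers the recommended ini values.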
Game Porting Toolkit is designed for developers … but any consumer can use it to play non-Mac games, and it works surprisingly well.
Huh, TIL
Your Passkeys have to be stored in something, but you don’t have to store them all in the same thing.
If you store them with Microsoft’s Windows Hello, Apple Keychain, or Google Password Manager, all of which are closed source, then you have to trust MS/Apple/Google. However, Keychain is end-to-end encrypted (according to Apple) and Windows Hello is currently not synced to the cloud, so if you trust those claims, you don’t need to trust that they won’t misuse your data. I don’t know if Google’s offering is end-to-end encrypted, but I wouldn’t trust it either way.
You can also store Passkeys in a password manager. Bitwarden is open source (though they did recently introduce a proprietary, source available SDK), as is KeePassXC. 1Password isn’t open source but can store Passkeys as well.
And finally, you can store Passkeys in a compatible security key, like the YubiKey 5 series keys, which can each store 100 Passkeys. This makes the stored Passkeys basically immune to being stolen, since they can’t be extracted from the key. Note that if your primary interest in Passkeys is the phishing resistance (near-perfect immunity to MitM attacks), then you can get that same benefit by using WebAuthn as a second factor. However, my experience has been that Passkey support is broader.
Revoking keys involves logging into the particular service and revoking them, just like changing your password. There isn’t a centralized way to do it as far as I’m aware. Each Passkey is only used for a single service, after all. However, in the same way that some password managers will offer to automatically change your passwords, they might develop a similar feature for Passkeys.
Do any of the iOS or Android apps support passkeys? I looked into this a couple days ago and didn’t find any that did. (KeePassXC does.)
You have your link formatted backwards. It should be Vaultwarden, with the link in the parentheses.
Up until a year ago, the README explicitly said they didn’t claim to be an open source project: https://github.com/jgraph/drawio/commit/8906f90ac0cc50a0c6da77c28cf9b2b2339277b1#diff-b335630551682c19a781afebcf4d07bf978fb1f8ac04c6bf87428ed5106870f5L10
For starters, it was never “open source”…
From your link:
Instead, as Winamp CEO Alexandre Saboundjian said, “Winamp will remain the owner of the software and will decide on the innovations made in the official version.” The sort-of open-source version is going by the name FreeLLama.
While Winamp hasn’t said yet what license it will use for this forthcoming version, it cannot be open source with that level of corporate control.
If I upload the source code for my project to Github/Forgejo/Gitlab/Gitea and license it under an open source license, allowing you to fork it and do whatever you want (so long as you follow the terms of my copyleft license), and I diligently ensure that code is uploaded to my repository before being deployed, but I ignore all issues, feature requests, PRs, etc., is my project open source?
Yes.
Likewise, if Winamp had been licensed under an open source license, it would have been open source, regardless of how much control they kept over the official distribution.
Winamp wasn’t open source because its license, the WCL, wasn’t open source.
a talking collar isn’t likely to help … if the cat is even willing to wear the thing at all.
“Realistically,” Quagliozzi says, “that collar would just be saying ‘get this fucking collar off me’ all the time.”
You could’ve scrolled down to the bottom, clicked on “Links,” then clicked on the repo link.
The repo has instructions to install a Snap or build from source. If you build from source, it looks like you should download an archive from the releases page rather than just pulling from master.
You probably just need Google One and YouTube Premium, which includes YouTube Music Premium.
Of course, if you don’t care about YouTube Premium, you could instead get a family subscription to a different music streaming service - Spotify, Tidal, and Apple Music are all leagues better than YouTube Music, in my opinion.
I don’t personally recommend Google for anything, to be clear.
Open-Webui published a docker image that has a bundled Ollama that you can use, too: ghcr.io/open-webui/open-webui:cuda. More info at https://docs.openwebui.com/getting-started/#installing-open-webui-with-bundled-ollama-support
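For reference, the run command looks roughly like this (assuming an Nvidia GPU and the tag mentioned above - the linked docs have the current, authoritative version):

```
docker run -d -p 3000:8080 --gpus=all \
  -v ollama:/root/.ollama \
  -v open-webui:/app/backend/data \
  --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:cuda
```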
For the purposes of this project, you could at least reproduce them by running wget and downloading them from the original projects.
Synthetic media should be required to be watermarked at the source
Bit late for that (even in 2023). Best we could do now is something like public key cryptography, with cameras having secret keys that images are signed with. However:
For artists and photographers with old school cameras (“old school” meaning “doesn’t compute and sign a perceptual hash of the image”), something similar could still be done. Each such person can generate a public / private key pair for themselves and sign the images they’ve created manually. This depends on you trusting that specific artist, though, as opposed to trusting the manufacturer of the camera used.
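To make the manual-signing idea concrete, here’s a minimal sketch using Ed25519. The file name is hypothetical, and as noted above, a real scheme would sign a perceptual hash rather than the raw bytes so that benign re-encoding doesn’t break the signature:

```python
# Minimal sketch of the "artist signs their own images" idea using Ed25519.
# Assumes the `cryptography` package (pip install cryptography); file name is hypothetical.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# One-time: the artist generates a key pair and publishes the public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Signing: sign the raw image bytes (a real scheme would sign a perceptual hash).
with open("photo.jpg", "rb") as f:  # hypothetical file
    image_bytes = f.read()
signature = private_key.sign(image_bytes)

# Verification: anyone with the published public key can check it.
try:
    public_key.verify(signature, image_bytes)
    print("Valid: this image is vouched for by the key holder")
except InvalidSignature:
    print("Invalid: the image was altered or wasn't signed by this key")
```

The trust question is the same either way: verification only tells you which key signed the image, so it’s only as meaningful as your trust in the person (or camera manufacturer) holding that key.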
This isn’t true or how it works, but there is a law being proposed that would sorta make it so: https://arstechnica.com/information-technology/2024/08/senates-no-fakes-act-hopes-to-make-unauthorized-digital-replicas-illegal/
In the US, your likeness is protected under state law and case law, rather than federal law, and I don’t know of any such law that imposes a responsibility upon sites like Twitter to take down violations upon your report in the same way that the DMCA does. Rather, they allow you to sue the entity who used your likeness for damages in civil court. That isn’t very useful to Jane when her ex-boyfriend uploads revenge porn of her or to Kate when a random Twitter account deepfakes her face onto a nude.
However, if a picture you have copyright to (like a selfie) is used as an input into an AI, arguably you have partial copyright to the output, since the AI-generated elements aren’t copyrighted and the output couldn’t have been created without your input. As such, I think it would be reasonable to issue a DMCA takedown request if someone posted a nonconsensual deepfake of you, on the grounds that you have a good faith belief that you do have copyright to it. However, if you didn’t take the picture used as an input yourself, you don’t have copyright to it, and therefore you don’t have partial copyright to the output, either. If it’s a deepfake face swap, then whoever owns the copyright of the original scene image/video would also have partial copyright, and they could also issue a DMCA takedown request.
It’s like how they slapped ‘Smart’ on every tech product in the past decade. Even devices that are dumb as fuck are called ‘Smart’ devices.
I’m not a big fan of “Smart” as a marketing term, either, but “Automatable” doesn’t exactly roll off the tongue, and “Connected” doesn’t really have the same appeal. That said, “smart” was used pretty consistently to refer to devices that could be controlled as part of a “smart home.” It wasn’t supposed to refer to a device that itself was intelligent, though.
I always thought of AI as artificial consciousness, an unnatural and created-by-humans self-aware and self-thinking being.
Sounds like you’re thinking of AGI (artificial general intelligence), or that your understanding is based on sci-fi as opposed to the academic discipline/field of research, which has been around since the 1950s.
And yes, marketing is often inaccurate… but almost every instance I’ve seen where they say they’re using AI, they were.
In fact stuff like ChatGPT would’ve made more sense to actually be called ‘Smart’ search engines instead of ‘AI’.
IMO “Smart” would be more misleading than “AI,” even if “Smart” didn’t have an existing, unrelated meaning. I do think we could use better words - AI is such a broad category that it doesn’t say much to call a product “AI-powered.” Stable Diffusion and Llama use completely different types of AI, for example. But people broadly recognize the term (even if they don’t understand it properly) and the same can’t be said for terms like “LLM.”
They might be technological achievements, but they’re not AI.
You’re illustrating the AI effect - “discounting of the behavior of an artificial-intelligence program as not ‘real’ intelligence.” AI is used in a ton of different ways that you likely don’t ever think about or even notice.
I recommend reading over at least the introduction to the Artificial Intelligence article on Wikipedia before proclaiming that something that fits cleanly into the definition of AI isn’t AI.
Your comment makes no sense.