This Week In Security: AI Is Terrible, Ransomware Wrenches, And Airdrop

This week, we finally get the inside scoop on some old stories, starting with the Bitwarden Windows Hello problem from last year. You may remember, Bitwarden has an option to use Windows Hello as a vault unlock option. Unfortunately, the Windows credential API doesn't actually encrypt credentials in a way that requires an additional Windows Hello verification to unlock. So a derived key gets stored in the Credential Manager, and can be retrieved through a simple API call — even with the Bitwarden vault locked and the application closed.

There's another danger that doesn't even require access to the logged-in machine. On a machine that is joined to a domain, Windows backs up those encryption keys to the Domain Controller. And the encrypted vault itself is available on a domain machine over SMB by default. A compromised domain controller could snag a Bitwarden vault without ever running code on the target machine.

So first off, go take a look at this curl bug report. It's an 8.6-severity security problem, a buffer overflow in WebSockets. Yes, a strcpy call can be dangerous if there aren't proper length checks. But this code has pretty robust length checks. There just doesn't seem to be a vulnerability here. That's because this is a bug report that was generated with one of the Large Language Models (LLMs), like Google Bard or ChatGPT. There are some big bug bounties being paid out, so naturally people are trying to leverage AI to score those bounties. But as has been pointed out, LLMs are not actually AI, and the I in LLM stands for intelligence.

There have always been vulnerability reports of dubious quality, sent by people that either don't understand how vulnerability research works, or are willing to waste maintainer time by sending in raw vulnerability scanner output without putting in any real effort. What LLMs do is provide an illusion of competence that takes longer for a maintainer to wade through before realizing that the claim is bogus. A more charitable take than mine suggests that LLMs may help with communicating real issues through language barriers. But still, this suggests that the long-term solution may be "simply" detecting LLM-generated reports, and marking them as spam.

Posted in Hackaday Columns, News, Security Hacks. Tagged ai, airdrop, This Week in Security, vpn.
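The "simple API call" in question is `CredReadW` from `advapi32`, which hands back a stored credential blob with no Windows Hello prompt involved. Here's a minimal sketch in Python using `ctypes`; the target name `"Bitwarden_biometric"` is a hypothetical placeholder for illustration, not Bitwarden's actual entry name, and the function simply returns `None` on non-Windows systems or when no matching credential exists.

```python
import ctypes
import sys
from ctypes import wintypes

CRED_TYPE_GENERIC = 1  # generic credential type, per wincred.h


class CREDENTIAL(ctypes.Structure):
    # Mirrors the CREDENTIALW struct from wincred.h
    _fields_ = [
        ("Flags", wintypes.DWORD),
        ("Type", wintypes.DWORD),
        ("TargetName", wintypes.LPWSTR),
        ("Comment", wintypes.LPWSTR),
        ("LastWritten", wintypes.FILETIME),
        ("CredentialBlobSize", wintypes.DWORD),
        ("CredentialBlob", ctypes.POINTER(ctypes.c_ubyte)),
        ("Persist", wintypes.DWORD),
        ("AttributeCount", wintypes.DWORD),
        ("Attributes", ctypes.c_void_p),
        ("TargetAlias", wintypes.LPWSTR),
        ("UserName", wintypes.LPWSTR),
    ]


def read_generic_credential(target):
    """Return the raw credential blob for `target`, or None on failure.

    No Windows Hello verification happens here: any code running as
    the logged-in user can make this call.
    """
    if sys.platform != "win32":
        return None  # Credential Manager only exists on Windows
    advapi32 = ctypes.windll.advapi32
    cred_ptr = ctypes.POINTER(CREDENTIAL)()
    ok = advapi32.CredReadW(target, CRED_TYPE_GENERIC, 0,
                            ctypes.byref(cred_ptr))
    if not ok:
        return None  # no such credential stored
    try:
        size = cred_ptr.contents.CredentialBlobSize
        return bytes(cred_ptr.contents.CredentialBlob[:size])
    finally:
        advapi32.CredFree(cred_ptr)


if __name__ == "__main__":
    # "Bitwarden_biometric" is a hypothetical target name for illustration.
    print(read_generic_credential("Bitwarden_biometric"))
```

The point of the sketch is how little is required: no elevation, no user interaction, just one documented API call from the user's own session.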
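As for the curl report, the safe pattern the flagged code actually follows — verify the length against the destination buffer before calling `strcpy` — can be sketched like this (an illustrative example, not curl's actual WebSocket code):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define KEY_BUF_LEN 128

/* Illustrative sketch, not curl's actual code: the strcpy() is only
 * safe because the source length is checked against the destination
 * buffer size first. */
static int store_value(char dst[KEY_BUF_LEN], const char *src)
{
    if (strlen(src) >= KEY_BUF_LEN)
        return -1;            /* reject input that would overflow */
    strcpy(dst, src);         /* safe: length was verified above */
    return 0;
}

int main(void)
{
    char buf[KEY_BUF_LEN];
    char big[300];

    /* A short string fits and is copied. */
    assert(store_value(buf, "short enough") == 0);

    /* A 299-character string is rejected before strcpy() runs. */
    memset(big, 'A', sizeof big - 1);
    big[sizeof big - 1] = '\0';
    assert(store_value(buf, big) == -1);

    puts("ok");
    return 0;
}
```

A report that flags the `strcpy` call in isolation, without noticing the guard in front of it, is exactly the kind of plausible-sounding-but-bogus claim the maintainers had to wade through.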