One problem: we have built an immense network of supercomputers that is essentially a Commons. An abuse of this Commons that would be ridiculously unprofitable if it had to be carried out by humans -- say, an expected return of one one-hundredth of a cent per attempt -- is highly attractive to unscrupulous actors who can automate a billion attempts for a few days' or weeks' worth of setup and expect a hundred thousand dollars of return.
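The arithmetic behind that claim can be checked directly. A minimal sketch using the essay's own figures (one one-hundredth of a cent per attempt, a billion automated attempts):

```python
# Expected return per attempt: one one-hundredth of a cent, in dollars.
expected_return_per_attempt = 0.01 / 100  # $0.0001

# A billion automated attempts.
attempts = 1_000_000_000

# Total expected return for the attacker.
total_return = expected_return_per_attempt * attempts
print(f"${total_return:,.0f}")  # → $100,000
```

The point is that automation shifts the break-even line: a return far below the cost of a human's time becomes profitable once the marginal cost per attempt approaches zero.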
Another problem: there has been little incentive for software developers to guarantee the security (integrity, privacy, trustworthiness) of their products, because they face so little cost for ignoring it. It is usually easier to import a library of functions that someone else has written than to write your own, and it is always easier to send out software that is good enough to work than to spend the time making sure it always works as defined. (We should not be surprised at this: the hardest work is thinking clearly, and all software is a reification of decision-making processes.)
Yet another problem: people naturally ascribe everything in the universe to one of two models: living things, which act of their own accord, and non-living things, which are only acted upon by external forces. Computational devices violate this conceptual border in a very confusing way. A rock or a windmill always acts the same way when similar events happen, but a computer is so hideously complex that a single person cannot hope to build an internal model of how it will work without years of study; the easier path is to move the whole thing into the conceptual realm of living things. But living things have more sensory inputs, have evolution-driven instincts, and are assigned different degrees of trust according to their behavior. Each time a word processing program obeys our orders to change a font size, move the margins, or merge in a set of addresses and identifiers to produce customized letters, we increase our trust that it will reliably do what we tell it to do. Each time it offers a spelling correction, we increase our estimate that it is helpful and understanding. Those assessments are horribly misleading when extended to edge cases, and people are very bad at figuring out where those edge cases are.
Cultural expectations play into this story. We believe that a wallet will hold money; a wallet can be stolen; we can prevent the theft of the wallet by hiding it, or by holding on to it firmly, or even by leaving it at home. When we put money into a bank, the people at the bank take the money and keep it safe for us; even in the case of a bank robbery, we expect the government to step in and ensure that all the accounts are made whole. A software wallet might not have any of these attributes, yet we will call it a wallet and expect it to act more or less as a wallet or a bank.
The combination of these three problems leads to the present situation.