What if processor time got cheap?

How would everything change if there were a dramatic revolution in processor technology with respect to development? For example, what if a single processor were as powerful as a cluster? What would happen to security?

+4
9 answers

Er, this has happened many times over the past decades. What happened is that software became bloated with features that have little to do with getting work done and a lot to do with window decoration and bells and whistles.

+11

β€œFor example, what if one processor was as strong as a cluster? What will happen to security?”

Ah, kids today. I remember when a 1 MHz processor was really something. Now processors are about 2,000 times faster. A disk costs 1/10,000 of what it did. And what happened is ... well ... we have faster, cheaper computers.

What was the question again?

Right, the question. What will happen to security? Security? It is still a problem. It was a problem, and it remains a problem.

No matter how fast Windows gets, it still has VBA and ActiveX controls and other features that are security nightmares.

+10

If?

If?!?

To a large extent, this has been going on steadily for at least 50 years.

The implications for security are a non-issue, because security problems mostly come down to programmer practice and user behavior.

Even the small slice of it that is cryptographic security is not a problem, as long as the current mathematics holds up and we do not see general-purpose quantum processors developed, because:

  • For public-key systems based on factoring large numbers, the current state of the art puts key generation in a lower complexity order than exhaustive-search attacks.
  • For symmetric ciphers, the corresponding problems grow at the same rate as key sizes increase.

Note the conditions attached to that claim. Neither is guaranteed. But neither is the security model broken at any particular level of processor power.
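To put rough numbers on the exhaustive-search side of that claim, here is a back-of-the-envelope sketch in Python; the attacker speed of 10**12 guesses per second is an assumed figure purely for illustration:

    # Back-of-the-envelope sketch: exhaustive-search cost vs. symmetric key size.
    # The 1e12 guesses/second attacker speed is an assumed figure, not a benchmark.
    SECONDS_PER_YEAR = 365.25 * 24 * 3600
    GUESSES_PER_SECOND = 1e12  # assumed attacker throughput

    for key_bits in (56, 64, 80, 128, 256):
        keyspace = 2 ** key_bits
        years = keyspace / GUESSES_PER_SECOND / SECONDS_PER_YEAR
        print(f"{key_bits:3d}-bit key: {keyspace:.2e} keys, ~{years:.2e} years to search")

Even a large jump in processor power only shaves a constant factor off numbers that grow exponentially with key size.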


That said, the increase in horsepower has had one big effect on security: once computers left the machine room, it became possible for ordinary users to open enormous security holes.

Too bad so many of our lowest-level tools assume the conditions that held before that shift, eh?

+7

You simply increase the size of the keys used.
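As an illustration of how lopsided that trade is for the defender (assuming AES as the symmetric cipher), here is a small sketch; the round counts are the standard AES figures:

    # Moving from a 128-bit to a 256-bit AES key raises the defender's per-block
    # work from 10 to 14 rounds (~1.4x), while the attacker's exhaustive-search
    # work grows by a factor of 2**128.
    rounds = {128: 10, 192: 12, 256: 14}  # AES round counts per key size

    defender_ratio = rounds[256] / rounds[128]
    attacker_ratio = 2 ** 128              # extra brute-force factor for +128 bits

    print(f"defender pays ~{defender_ratio:.1f}x more per block")
    print(f"attacker pays ~{attacker_ratio:.2e}x more to brute force the key")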

+3

How would everything change if a dramatic revolution occurred in processor technology with respect to development?

This is happening as we speak. Multi-core processors require changes in programming style that all programming languages are still adapting to.

Ever wonder why new programming languages are so quick to advertise their concurrency support? It is because programming-language revolutions happen alongside hardware changes, and their designers know it. Don't believe me? Think about the big programming-language shifts:

  • FORTRAN and COBOL became popular with the advent of mainframes.
  • PASCAL and C became popular with the advent of minicomputers.
  • C++ became popular with the advent of microcomputers.
  • Java became popular with the advent of the Internet.

Whatever the next big programming language is, it is likely to have really good concurrency support.
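For a taste of what "really good concurrency support" looks like at the library level, here is a minimal sketch using Python's concurrent.futures; the prime-counting task and the input sizes are made up for illustration:

    # Minimal sketch of standard-library concurrency support: each task runs in
    # its own process, so CPU-bound work can occupy its own core.
    from concurrent.futures import ProcessPoolExecutor

    def count_primes(limit):
        """Count primes below limit by trial division (deliberately CPU-bound)."""
        return sum(1 for n in range(2, limit)
                   if all(n % d for d in range(2, int(n ** 0.5) + 1)))

    if __name__ == "__main__":
        limits = [50_000, 60_000, 70_000, 80_000]
        with ProcessPoolExecutor() as pool:
            for limit, primes in zip(limits, pool.map(count_primes, limits)):
                print(f"primes below {limit}: {primes}")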

+3

Ha. When I was a poor graduate student, I worked for a while at the University of Edinburgh on one of the world's first multiprocessor machines, the Distributed Array Processor. That was back in 1981. We talked endlessly about how powerful it was and what it could do, cutting-edge computation and all that (we were comparing biological protein and DNA sequences).

That was a long time ago. Last year I went to a public talk about the history of supercomputing in Edinburgh. They mentioned the DAP, then started comparing its computing power with something modern.

And guess what modern machine it turned out to be about as powerful as?
.
.
.
.
.
.
.
.
.
.
.
.
.
A PlayStation 2.

+3

Hasn't this already happened?
CPU cycles are now the cheapest resource. A memory access costs from tens to hundreds of processor cycles; hardware or disk access costs thousands. Most consumer applications benefit only marginally from a faster processor.

I still remember the days when we counted clock cycles, and saving ten cycles at the cost of 1K of code was a smart idea.

What usually happens is that new architectures take the new balance as a given: new languages, new libraries, new platforms. Most code written to date, consciously or not, is written under the assumption that cycles are cheap and memory accesses are not. This is true for application code and platforms alike.

The next major change seems to be parallelization: future code will be written under the assumption that cycles are even cheaper when they access private (rather than shared) data.
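A small sketch of that private-data style in Python; the data, the chunking, and the worker count are illustrative assumptions:

    # Each worker reduces its own private chunk; only the small per-chunk results
    # are shared and merged at the end, so there is no contended shared state.
    from concurrent.futures import ProcessPoolExecutor

    def sum_of_squares(chunk):
        total = 0                      # private accumulator, local to this worker
        for x in chunk:
            total += x * x
        return total

    if __name__ == "__main__":
        data = list(range(1_000_000))
        workers = 4
        size = len(data) // workers
        chunks = [data[i * size:(i + 1) * size] for i in range(workers)]

        with ProcessPoolExecutor(max_workers=workers) as pool:
            partials = list(pool.map(sum_of_squares, chunks))

        print(sum(partials))           # the only step that touches shared results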

+2

This is already starting to happen, with cheap multi-core processors and the ability to build clusters of Cell-based devices such as the PS3 to attack encryption, hashing, and the like.

You do as much as you can to make attacks computationally infeasible, but at the end of the day there is always a weak point that can usually be brute-forced given enough power. I believe that is how MD5 started to fall.
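As a toy illustration of the brute-force point, here is a sketch in Python; the four-character lowercase "password" space is deliberately tiny, and the secret value is made up:

    # With a small enough search space, raw processing power recovers a secret by
    # exhaustion; real attacks differ mainly in scale and hardware.
    import hashlib
    from itertools import product
    from string import ascii_lowercase

    target = hashlib.md5(b"wasp").hexdigest()   # pretend we only know the hash

    def brute_force(target_hex, length=4):
        for chars in product(ascii_lowercase, repeat=length):
            candidate = "".join(chars)
            if hashlib.md5(candidate.encode()).hexdigest() == target_hex:
                return candidate
        return None

    print(brute_force(target))   # finds "wasp" after at most 26**4 = 456,976 tries

Each extra character or larger alphabet multiplies that count, which is why "just increase the key size" remains the standard answer.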

In terms of general coding, processing power should not be an excuse for sloppy, slow code. A hardware upgrade should not be treated as a silver bullet for poorly designed code.

+1

In my opinion, this is already happening with quad-core processors, cheap memory, and virtualization.
This lets you drastically reduce the number of servers in an organization, which cuts costs significantly.

This means that other costs, such as staffing and networking, will look comparatively more expensive, which is likely to put downward pressure on prices for those items as well.

Take, for example, a company that can move from 150 servers to 10. The associated hardware, networking, and operating costs shrink accordingly.

Then think about the software that runs on those servers, its licensing, and the salaries of the people who look after those applications.

I would think that in the future people will have to justify what they spend on software and consulting fees.

+1
