
Byte Counting

Counting interface and CPU operations to detect malware


Some of the security exploits I read about append extra bytes of malware to innocent messages.

It is quite cheap to add read-only physical counters to hardware interfaces, and count every operation of every variety that every interface does. If the counters are rarely read, they can be accessed via an on-chip multiplexed serial bus, minimizing the hardware and power cost of reading them.
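The idea can be sketched in software. This is an illustrative model only, with invented names; real hardware would expose read-only counter registers, perhaps over a multiplexed serial bus, not a software object.

```python
# Software model of per-interface hardware operation counters.
# Names (InterfaceCounters, record, snapshot) are hypothetical.

class InterfaceCounters:
    """Counts every variety of operation an interface performs."""

    def __init__(self, varieties):
        self._counts = {v: 0 for v in varieties}

    def record(self, variety):
        # In hardware this increment is wired into the interface itself;
        # software (including malware) cannot skip or alter it.
        self._counts[variety] += 1

    def snapshot(self):
        # Read-only view, analogous to reading the counter registers.
        return dict(self._counts)

nic = InterfaceCounters(["rx_bytes", "tx_bytes", "rx_packets", "tx_packets"])
nic.record("rx_packets")
for _ in range(64):
    nic.record("rx_bytes")
print(nic.snapshot()["rx_bytes"])   # 64
```

The essential property is that `record` is invoked by the (modeled) hardware, while outside observers only ever get the read-only `snapshot`.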

Hardware can copy the counters to additional registers at the start and end of an operation, then compare what the interface actually did with what it was supposed to do. A discrepancy may be merely a flaw in understanding, which should be detected and eliminated during code design, or an unexpected count difference may indicate that something shady is going on. The counters can be made available to multiple otherwise-firewalled processes, useful for white-hat traffic analysis without revealing exactly what is said to whom.
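The snapshot-and-compare check above might look like the following sketch. The counter values and expected deltas are invented for illustration; real hardware would latch the counters into shadow registers at operation start and end.

```python
# Compare what an interface actually did with what it was supposed to do,
# given counter snapshots taken before and after an operation.

def check_operation(before, after, expected_delta):
    """Return counters whose change differs from expectation.

    An empty result means the operation matched its specification;
    anything else is either a flaw in understanding or something shady.
    """
    discrepancies = {}
    for name, want in expected_delta.items():
        got = after.get(name, 0) - before.get(name, 0)
        if got != want:
            discrepancies[name] = (want, got)
    return discrepancies

before = {"tx_bytes": 1000, "tx_packets": 10}
after  = {"tx_bytes": 1540, "tx_packets": 11}   # 540 bytes sent, not 512
flagged = check_operation(before, after, {"tx_bytes": 512, "tx_packets": 1})
print(flagged)   # {'tx_bytes': (512, 540)} - 28 unexpected bytes
```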

The counters can be turned off and zeroed, and such operations can themselves be counted in hardware. Perhaps this can leak information to an adversary, but not much information if the code is designed correctly. This hardware feature would also be a debug and evaluation tool to design such code. Counter visibility should also have an "off" switch, such that if a privacy-threatening exploit is discovered, the counters can be disabled during startup until the next reset. No software-mediated "on" switch, of course; that should only be an obvious user-controlled function like physical power-on.
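Extending the same hypothetical model, the control operations themselves can be counted, and the visibility "off" switch can be made one-way in software, restorable only by a reset:

```python
# Sketch: zeroing and disabling visibility are themselves counted events,
# and visibility, once switched off, stays off until the next reset.
# Structure and names are hypothetical.

class AuditableCounters:
    def __init__(self):
        # __init__ plays the role of a hardware reset.
        self._counts = {"ops": 0, "zeroed": 0, "visibility_off": 0}
        self._visible = True

    def record(self):
        self._counts["ops"] += 1

    def zero(self):
        # Zeroing the main counter is itself a counted event.
        self._counts["ops"] = 0
        self._counts["zeroed"] += 1

    def disable_visibility(self):
        # One-way: no software "on" switch; only a reset restores visibility.
        self._counts["visibility_off"] += 1
        self._visible = False

    def snapshot(self):
        return dict(self._counts) if self._visible else None

c = AuditableCounters()
c.record(); c.record(); c.zero()
print(c.snapshot())     # {'ops': 0, 'zeroed': 1, 'visibility_off': 0}
c.disable_visibility()
print(c.snapshot())     # None, until reset
```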

Hardware can also have flaws, of course, but such flaws cannot be injected with runtime malware. Exploits injected during design and manufacturing are possible, but they are copied by the thousands and millions, and "hold still" for reverse engineering and comparison to gate level specification.

There are subtle ways to inject exploits at the transistor level if you can predict the supply chain and who gets which physical component, but that is very expensive compared to the usual "all parts alike" manufacture and test process. However, if nobody bothers to reverse-engineer examples of the hardware, nobody can verify that the hardware does what it claims to do.

After some elaboration and development, I hope server sky components will have such counting capabilities to help with security.

ByteCounting (last edited 2015-07-02 18:11:53 by KeithLofstrom)