Data recovery

Technical information


This page covers some technical topics concerning hard disks and, more generally, the recovery of data from magnetic media.

Topics covered:

What does the inside of a hard disk look like?

Here is an open disk!

Shown above is a recent-generation IDE disk with its protective cover removed (for the curious: a 3.5" Maxtor DiamondMax, 5400 rpm, with SMART). SCSI disks are constructively similar, and their main parts can be traced back to this image. The components are housed in a chassis, typically of die-cast light alloy, closed by an airtight cover fitted with gaskets and a number of fixing screws. Inside we find:

  • the platters: four in the model pictured, but there can be two or more

  • the main motor (not visible, being on the opposite side), which spins the platters

  • the heads, mounted at the end of a head arm: there is a pair for each platter, acting on its two faces

  • the actuator motor, which sweeps the head arms through an arc

  • the electronics for internal management of the signals to and from the heads, nowadays consisting of a single dedicated chip

  • the internal wiring: in this case a flexible printed circuit

  • the standard 40-pin IDE connector (or the 50-pin SCSI or 68-pin SCSI Wide connector)

  • the standard 4-pin power connector

  • the jumpers for master/slave configuration and for the various options provided by the manufacturer

  • the controller's printed circuit board (not visible, being on the opposite side)

As can be seen, the disk is built from very few parts; its operation is assured by the advanced technology of the materials employed and by the experience accumulated over the years, which has led to today's results. In a short time we have gone from the by now prehistoric 10 MB drives, big as a shoe box, with access times of many tens of milliseconds, to 36 GB in the 3.5" format with access times under 9 ms. Maxtor, for example, noted in one of its advertisements that five years ago its production lines could turn out a 2 GB disk every 9 seconds, while today they produce a 20 GB disk every 2 seconds. This is why manufacturers offer warranty periods of 2 years or more, up to 5 or 6 or more for SCSI drives. It should be noted, however, that disk mortality, especially in the initial period of service (infant mortality), is still very much present, even if much reduced compared to the past; the concept of backup must be kept well in mind if one wants to work safely.
Even details that are not evident at first sight depend on very sophisticated technology. For example, a controlled atmosphere is maintained inside the chassis, and the whole assembly of the parts is carried out in dust-free rooms, called clean rooms, where the dust concentration must be below a few parts per million (against the many millions of an ordinary environment!). This controlled atmosphere is essential to the operation of the disk, since the microscopic heads fly at high speed over the delicate surfaces of the platters without touching them, at a few millionths of a millimeter, thanks to aerodynamic effects; the smallest grain of impurity would be as disastrous as a boulder in front of a racing car. Opening the protective cover in a normal environment, even just to take a glance inside, therefore causes the immediate death of the disk (and voids the warranty!).


The motors
The drive contains two motors: one spins the platters, the other moves the heads.
These are high-precision motors, electronically controlled by special integrated circuits.
Disks require a +5V supply for the logic circuits and +12V for the motors. A standard connector (AMP type) has been adopted in common for all magnetic devices, whether HDD, CD-ROM or similar, in the 5 1/4" format. The power consumption of recent-generation disks is very low, and most of the energy is actually used by the motors. The maximum current is drawn when the disk starts up, to bring the platters up to speed (spin-up); after that there are current pulses during head movement. Consumption is lowest in the sleep or standby state, in which the motors are stopped and the control circuitry is put into a minimum-consumption condition.







In modern drives, then, electrical consumption is a relatively minor issue, while cooling is becoming more important, since part of the absorbed energy is dissipated as heat that must be effectively removed, on pain of shortening the life of the disk. Despite the use of high-technology materials, the micrometric tolerances and the internal workings of the parts, necessary to reach ever higher performance, make heat a notable enemy. Until a short time ago it was common for the control electronics to perform a thermal recalibration, necessary to maintain the correct alignment between heads and platters as they deform with heat; this recalibration requires a momentary halt in data transfer while the controller handles the operation. It follows that, for processes in which a constant flow of data is vital (CD burning, audio, digital video), a recalibration extended beyond a certain time seriously damages the integrity of the process itself.
In the most recent disks this realignment is performed on the fly during ordinary operation, and the problem of loss of synchronism between the process and the data flow no longer occurs.
Currently the standard size is 3.5" wide and 1/2" thick; in this format IDE disks up to 26 GB and SCSI disks up to 36 GB are available, while SCSI capacities can climb to many tens of GB in larger mechanical formats.


Let us look at some of the parts in detail:

The controller's printed circuit board
A single card contains all the parts that make up the disk's controller. In general it consists of a printed circuit board fitted into the body of the disk, carrying a microprocessor, various auxiliary circuits and some memory, both for internal use and as a cache. The circuit performs the following functions:

  • Control of the platter rotation motor

  • Control of the head actuator movement

  • Management of the interface with the outside world (IDE or other)

  • Management of power-saving functions (if present)

  • Management of error correction, data-flow control and SMART functions (if implemented)

Each manufacturer solves the problem with a different circuit, and the same manufacturer often employs different solutions, while the system of interfacing with the rest of the PC remains standardized.

In practice several different interface standards exist; the main ones are:

  • ST506

  • ESDI

  • SCSI

  • AT-Bus, from which IDE/EIDE/ATA were later derived


It is important to note the fundamental difference between the interface and the disk's control system.
Disks with the old ST506 or ESDI interfaces had the controller on a card inserted in the mainboard bus, while the electronics on the disk itself had functions limited to writing and reading the data according to the commands transmitted by the controller.
In more recent disks, the so-called AT-Bus type, which then evolved into IDE/EIDE/ATA, the whole control logic is instead located on the disk, and the system only provides a communication channel consisting of the IDE interface ports.

The following image shows an old model with an ST-506 interface, but it sufficiently illustrates the point.
The disk's control microprocessor is, obviously, equipped with its own firmware (management software internal to the circuit, sometimes on EPROM or ROM, more rarely in Flash), its own RAM, one or more clocks and suitable I/O circuits.
A certain amount of cache is often also available, acting as a buffer to smooth the flow of data to and from the disk, usually 128, 256 or 512 kB. Generally, writing and reading data on the platters is much faster than transferring the same data through the interface; the cache therefore provides a useful reservoir where data can be parked momentarily. Use of the cache can greatly reduce data access time. (Be careful not to confuse the small cache on the disk's control circuit with the cache that the operating system may create for hard disk data management, which is allocated in the system's main memory.) More on this subject further on.
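The role of this on-board cache can be sketched in a few lines of Python. This is a hypothetical toy model, not any manufacturer's implementation: recently read sectors are kept in a small buffer so that repeated requests avoid touching the slow platters.

```python
from collections import OrderedDict

class SectorCache:
    """Toy model of a disk's on-board read cache (illustrative sketch)."""

    def __init__(self, capacity_sectors):
        self.capacity = capacity_sectors
        self.store = OrderedDict()   # sector number -> data
        self.platter_reads = 0       # how often we had to touch the platters

    def _read_from_platters(self, sector):
        self.platter_reads += 1      # the slow path
        return b"data-for-sector-%d" % sector

    def read(self, sector):
        if sector in self.store:
            self.store.move_to_end(sector)       # mark as recently used
            return self.store[sector]            # served from cache: fast
        data = self._read_from_platters(sector)  # served from platters: slow
        self.store[sector] = data
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)       # evict least recently used
        return data

cache = SectorCache(capacity_sectors=4)
for s in [1, 2, 3, 1, 2, 3]:         # re-reads of sectors 1, 2, 3 hit the cache
    cache.read(s)
print(cache.platter_reads)           # only 3 platter accesses for 6 reads
```

The same least-recently-used idea is what makes repeated reads of the same sectors much faster than the raw interface would allow.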

The printed circuit board has been removed from the disk body and rotated 90 degrees to the left.
The EPROM containing the disk's management firmware can be seen next to the CPU (a Hitachi 63xx series processor).
The motors are controlled by suitable power drivers which supply the signals required for correct operation, under CPU control; a bundle of wires connects the motors to the printed circuit. In the photo only the rotation (spin) motor is shown connected; the cable of the head actuator motor was too short to reach its connector with the board rotated.
Note the shaping of the printed circuit board, necessary to make it fit within the disk's typical overall dimensions.

Given the complexity of the controller, the other face of the printed circuit board is also full of components. The photo shows the circuit repositioned on the disk, as it normally sits. Note the fair number of small surface-mount components and, in the lower left corner, the head motor which, protruding in thickness, forced the designer to shape the whole printed circuit around it; the non-rectilinear form of the edge at some points is due to the need to leave room for the supporting structure of the body (not included in the photo for clarity).
The interface connector is of the old ST-506 type, which requires two cables running to the controller card seated in a PC slot; the contacts are gold-plated to improve the quality of the electrical connection.



The interface
The interface is the part of the circuitry, hardware and software, that allows the disk to be connected to the central unit. Several different types have been implemented, some of which have become standards over time.
Without going into great detail here, the historically most important standards have been:

  • ST-506. By now obsolete, it was the interface of the first PCs. The actual controller was a card inserted on the PC bus and connected to the disk by two flat cables of different widths. The physical limit is two units per controller.

  • XTbus and ATbus. In the first PC XTs and ATs this was an innovative structure in which the card on the bus was merely a port, and the actual controller was mounted on the disk. It soon evolved into the IDE standard.

  • IDE and derivatives. The controller is installed on the disk and the interface is reduced to a port, usually integrated on the motherboard. The connection is made with a single 40-pin cable. The physical limit is two units per port (Master and Slave).

  • SCSI. The connection to the PC is made through a specific controller that implements a command dialogue, bus-style, on the connecting cable, allowing up to 7 units in parallel. The single cable may have 50 pins, for the 8-bit variant, or 68 pins for the 16-bit one (SCSI Wide). Several controllers can coexist in the same system, increasing the number of installable units.
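The addressing limits just listed can be tabulated in a few lines (using only the figures quoted above; real systems may differ):

```python
# Maximum drives attachable per controller/port, as quoted in the text above.
limits = {
    "ST-506": 2,   # two units per controller card
    "IDE":    2,   # Master and Slave per port
    "SCSI":   7,   # seven units per controller on the bus
}

def max_drives(interface, ports_or_controllers):
    """Total units for a given number of controllers/ports of one type."""
    return limits[interface] * ports_or_controllers

print(max_drives("IDE", 2))    # a typical board with two IDE ports -> 4
print(max_drives("SCSI", 2))   # two SCSI controllers coexisting -> 14
```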


The firmware
From what has been said so far, the presence of one or even several processors on the electronic boards mounted on hard disks is by now an established fact. Obviously, if a processor is present, there will also be a chip containing the software that runs the whole assembly. This is called the firmware.
It is essentially a mini operating system containing the hardware control routines, the encoding procedures and the control of the interface toward the main system.
Several processors were mentioned because some functions, for instance the interface, may be assigned to one processor while the complex write and read operations are carried out by another unit. The motors, too, are controlled by integrated circuits endowed with considerable "intelligence" and autonomy.
The firmware can be contained in the processors themselves, which have an internal programmable area such as PROM or EPROM, or in an external, possibly reprogrammable, chip (EEPROM, Flash). Do not think, however, that this implies the possibility of upgrading the "BIOS" of the disks; various reasons, among them the structure of the hardware and the need to protect the contents from reverse engineering, mean that this possibility is not among those offered by disk manufacturers.
Incidentally, cases of conflict do occur between disk firmware and motherboard BIOS; in these cases, given the impossibility of changing the firmware on the disks, the solution lies in upgrading the motherboard BIOS. If this is not possible, unfortunately nothing remains but to replace one of the two components.
Other firmware problems may concern the incompatibility of one drive with another of a different brand, or problems with DMA management (Bus Mastering).


The electronics for internal management of the signals to and from the heads
The magnetic tracks on the platters have very small dimensions and must be read or written in very short times to obtain high performance. In modern hard disks the operation of controlling the current flow in the heads is carried out by special circuits dedicated to this purpose, usually placed inside the disk, as close as possible to the heads.
This is made necessary by the need to reduce to a minimum the length of the conductors that carry the signals to and from the heads themselves.
This integrated circuit is mounted on a flexible printed circuit that also forms the connection between the head arm and the connector carrying the signals to the main printed circuit board.

In the image the large head-control chip can clearly be seen mounted on the flexible printed circuit, which has been extracted from inside the hard disk enclosure. On the flexible circuit other components necessary for operation can also be seen: surface-mount resistors and capacitors (the small dark rectangles with metallic ends). At the left end is the connector for the connection to the controller's printed circuit board.
On the right side the head arm can be seen, with three groups of 2 heads (3 platters).

Digital information consists of sequences of 1s and 0s, which electronic circuits translate into voltage levels (1 = high level, 0 = low level). On the magnetic medium, information is stored as magnetic pulses.
The head circuit therefore has the purpose of converting logic levels into magnetic levels and vice versa.
Magnetic information consists of tiny areas of the disk surface in which a magnetic field is preserved. We can think of them as tiny magnets, a few tens of millionths of a millimeter wide. Think of an ordinary magnet: it has two poles, called north and south, and the magnetic energy (the magnetic field) flows between the two poles; magnetic information is thus made of tiny magnets. How do they manage to remain on the disk? Thanks to the capacity of certain materials to retain part of the magnetic field that has been applied to them.
Try picking up a handful of pins or nails with a magnet: once the magnet is removed, many of them will remain magnetized and will in turn be able to lift other pins or nails. Now replace the magnet with an electromagnet (the head) and you have the principle on which the magnetic writing of the disk is founded (and of floppies, tapes, audio cassettes, and so on).
Indeed, thinking of audio cassettes, we have at hand a complete example of a magnetic write/read procedure analogous to that of the hard disk. The comparison holds, at a first approximation; it is enough to change a few terms, as in the table that follows:

Audio recorder (recording) | Magnetic disk (writing)

The voice (a vibration of the air) is converted into an electrical signal by the microphone | The software converts the data into binary logic signals

The recording circuit processes the signal and sends it to the head | The disk control circuit processes the signal and sends it to the head controller

The head converts the electrical signal into a magnetic field and "engraves" it on the tape | The head converts the electrical signal into a magnetic field and "engraves" it on the disk surface

Once "written", the magnetic material of the tape (or of the disk) will preserve for a long time the magnetic fields that constitute the recording; it can be "read" with the inverse procedure:

Audio recorder (playback) | Magnetic disk (reading)

The tape runs in front of the head, and the variations of the recorded magnetic field are converted into variations of electrical level | The disk surface rotates in front of the head, and the variations of the recorded magnetic field are converted into variations of electrical level

The playback circuit processes the received signal and amplifies it to a suitable level | The read circuit processes the received signal and converts it into digital signals

The loudspeaker converts the electrical signal into audible sound (a mechanical movement of the air) | The software extracts the stored information from these digital signals

Clear, no?
In fact the example is fairly good; to be precise, however, it must be added that the audio tape carries a "continuous" magnetic field, varying in value according to the modulation of the sound it has to preserve. On the disk, instead, small, well-defined discrete magnetic fields are recorded.

The head controller is mounted on a flexible printed circuit: a special type of printed circuit made of extremely flexible and resistant material, capable of being flexed millions and millions of times, following the movement of the heads, without deforming or breaking.
The signals pre-processed by the head control circuit are then transferred to the main circuit, where they are elaborated.
Different methods exist for organizing, writing and reading the data. The current trend is to place an ever greater amount of data on the same magnetic surface; this is obtained through progress in construction technologies and materials, and through an increase in the complexity of the circuits and of the read/write systems, with the introduction of special heads, processors and quite complex DSP (Digital Signal Processing) techniques.


Encoding of the signal
One might be led to think that the north and south polarities of the little magnets could be used directly as 1s and 0s; in fact things are not so simple, because through the heads it is easy to read not the polarity of a field but its variation, for instance the transition from a field oriented north-south to one oriented south-north.
The problems connected with maintaining microscopic magnetic fields of opposite polarity packed tightly one against the other also require particularly complex technologies: the smaller the areas occupied by the fields (more fields per unit of surface = more information storable on the disk), the greater the need for magnetic materials of excellent quality and purity, and for extremely small heads that are nonetheless able to produce very intense fields (to magnetize the surface when writing) and to detect reliably the tiny variations between fields (when reading).
Moreover, an ever higher read/write speed is demanded by growing performance requirements; the increase in transfer rate (quantity of data transferred) is obtained with a faster rotation of the disk, so as to bring a greater surface under the heads in a unit of time.
This requires that the data are not simply written onto the disk as they are, but are encoded according to particular systems and algorithms. The encodings also have the purpose of packing as much data as possible onto the useful surface.
In this sense, an example of "coding" that can give a rough idea of one of the aims of these systems is the well-known compression of data, precisely with "codings" such as ZIP, ARJ, etc.
What does it amount to, in substance? If we consider that an organized set of data (a file) is composed of a certain number of bytes, say 100, written on a disk it will occupy a certain surface. If, with some system, I can compress this information so that the same content fits into 50 bytes, I will be able to store twice as much information on the surface of the disk (and so, in a sense, I will have increased the capacity of the disk).
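The effect described can be tried directly with Python's standard zlib module (the same family of algorithms used by ZIP-style tools); a repetitive 100-byte "file" shrinks to a fraction of its original size:

```python
import zlib

data = b"ABCD" * 25             # a 100-byte "file" with repetitive content
packed = zlib.compress(data, level=9)

print(len(data), len(packed))   # the compressed form is far smaller
assert len(packed) < len(data)  # so the same surface stores more information
```

Note that only redundant data compresses this well; already-compressed or random content would not shrink at all.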

The encoding methods, however, must also deal with the physical problems of writing and reading data on a medium rotating at high speed, all while maintaining the highest possible standards of safety and reliability. For instance, a considerable problem is identifying where one field ends and the next begins (remember that we are speaking of fields of infinitesimal dimensions rotating under the heads at 5400, 7200 or 10000 revolutions per minute!).
For instance, to store a string of logical 1s we certainly cannot lay down an equivalent run of magnetic fields of the same polarity and intensity; it would be extremely difficult to identify where one field ends and the next begins...
One possible solution is to tie the writing to a clock: each piece of information is written, spatially and temporally, in strict relation to a synchronization signal that helps to identify it during reading, which will be synchronized to the same clock.
Other, more complex, encoding methods have been developed in order to store an ever greater quantity of data on the disk. In substance, to conclude: hardware technologies evolve to make ever greater densities of magnetic fields per unit of surface possible, while the encoding methods try to pack a greater quantity of information into those fields, at the same time keeping writing and reading reliable.
In the following paragraphs we look at some of the most widespread encoding methods.

Frequency Modulation (FM)
One of the first systems adopted for encoding digital signals to be saved on a magnetic medium was frequency modulation (FM). The concept is similar to that of FM radio broadcasts (Frequency Modulation). In this method a datum of value 0 is written as two consecutive magnetic fields of opposite polarity, while a 1 consists of two fields of the same polarity. The write signal is synchronized with a clock: the first magnetic field, corresponding to the first beat of the clock, constitutes the "start" of the bit, while the following one is its value.
The name "frequency modulation" derives from the fact that the reading is "in motion". If we indicate the inverted field with N and the non-inverted one with P, a logical 0 can be represented as NP and a 1 as NN; a byte composed of 8 bits of value 0 will be NPNPNPNPNPNPNPNP, while a byte of 1s will be NNNNNNNNNNNNNNNN, tied to a clock; it can be seen that the representation frequency of the 1s is double that of the 0s.
FM, widely used in the first magnetic storage systems, is no longer suitable for current solutions. Its main limit lies in the need for two magnetic fields to define one bit.
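Using the N/P notation above, the FM rule fits in one line; this sketch simply reproduces the two byte examples from the text:

```python
def fm_encode(bits):
    """FM: each bit becomes two fields - a 'start' field N (the clock beat),
    then N for a 1 (same polarity) or P for a 0 (opposite polarity)."""
    return "".join("NN" if b else "NP" for b in bits)

zero_byte = fm_encode([0] * 8)
one_byte  = fm_encode([1] * 8)
print(zero_byte)  # NPNPNPNPNPNPNPNP
print(one_byte)   # NNNNNNNNNNNNNNNN
# A byte of 1s packs twice as many N fields per bit: double the frequency.
```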

Modified Frequency Modulation (MFM)
FM, with the appearance of the first rotating-platter hard disks, was replaced by MFM (Modified Frequency Modulation), which reduces the number of magnetic fields necessary to define a bit, inserting an inverted field only in the presence of two consecutive zeros. In this way even a doubling of the disk's capacity can be reached.
The MFM method was employed in the first hard disks (ST-506 interface) and is still today the encoding system used for floppy disks.
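A simplified sketch of the rule just stated (here each bit cell has a clock slot and a data slot, 1 marking a field inversion and 0 its absence; the clock inversion is inserted only between two consecutive zeros). The comparison with FM shows why the saving allows higher capacity:

```python
def fm_flux(bits):
    """FM for comparison: a clock inversion for every bit, then the data slot."""
    out = []
    for b in bits:
        out += [1, b]          # clock inversion, then data field
    return out

def mfm_flux(bits):
    """MFM: data inversion for each 1; clock inversion only between two 0s."""
    out, prev = [], 0
    for b in bits:
        out += [1 if (prev == 0 and b == 0) else 0, b]
        prev = b
    return out

bits = [1, 0, 0, 1, 0, 0, 0, 1]
print(sum(fm_flux(bits)), sum(mfm_flux(bits)))  # MFM needs far fewer inversions
```

With fewer inversions per bit, the same magnetic surface can hold more bits at the same inversion density.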

Run Length Limited (RLL)
Already with the series of hard disks with ST-506 or SCSI interfaces and capacities above 40 MB, the RLL method (Run Length Limited) was introduced. It is not really a single method but a "family" of sophisticated encoding methods designed to overcome the limits imposed by simple frequency-modulation encodings.
RLL operates not on single bits but on groups of bits, using the clock to obtain data packets of great compactness that allow more efficient and reliable writing and reading.
RLL has two main parameters, run length and run limit, from which it takes its name. Run length is the minimum space (time/surface) between two inverted magnetic fields, while run limit is the maximum allowed. The time/space between two inverted fields must not be too large, otherwise the read head risks losing synchronism with the clock. At the same time the heads have to become smaller and smaller to allow the precise writing of the fields. Quite a challenge!
The parameters of RLL are expressed in the form "run length, run limit RLL"; a common type, for instance, is 1,7 RLL.
Sophisticated RLL-type encodings require ever more complex controllers; the electronic boards on board the disks by now include microprocessors, crystals, memories and dedicated circuits.
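The (d,k) = (1,7) constraint mentioned above is easy to express in code: between any two inversions there must be at least 1 and at most 7 non-inverted fields. A minimal checker (illustrative only, not an actual RLL encoder):

```python
def rll_valid(flux, d=1, k=7):
    """Check an RLL (d,k) constraint on a flux string.

    flux is a string of '1' (field inversion) and '0' (no inversion);
    every gap of zeros between two inversions must satisfy d <= gap <= k.
    """
    positions = [i for i, f in enumerate(flux) if f == "1"]
    for a, b in zip(positions, positions[1:]):
        gap = b - a - 1                  # zeros between two inversions
        if not (d <= gap <= k):
            return False
    return True

print(rll_valid("1010000000"))   # gap of 1 zero: fine for (1,7) -> True
print(rll_valid("11000010"))     # adjacent inversions (gap 0) -> False
```

The real encoding step maps arbitrary data bits onto flux strings that always satisfy this constraint; the checker above only verifies the property.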

Partial Response, Maximum Likelihood (PRML)
With microprocessors on board the disks, a great deal can be done. The writing systems used until recently were founded on detecting "on the fly", through the head, the variations between direct and inverted magnetic fields, analyzing the variations against a clock.

As data density and platter rotation speed increase, a limit is reached beyond which the analysis of the field variations becomes problematic and the possibility of error is amplified. The signal from the heads becomes less and less clean, less and less digital and more and more analog, making it difficult for classical digital circuits to define the areas of the elementary magnetic fields that constitute the core of the information.
Manufacturers are continually searching for new solutions, mainly based on the digital analysis of the signals, through special processors called DSPs (Digital Signal Processors) capable of operating at high speed on the information coming from the heads.
Quantum, for instance, has developed a system called Partial Response, Maximum Likelihood (PRML) that employs complex hardware and software techniques. No attempt is made any longer to identify the single fields; instead, through DSPs and suitable algorithms, blocks of analog data read by the heads are processed (partial response) to determine the bit sequence that has the greatest probability of being the one that produced the writing of that specific sequence of fields (maximum likelihood).
Each manufacturer develops its own ad hoc technologies, and development is still in progress.
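The maximum-likelihood idea can be caricatured in a few lines. Assume a simple "partial response" model in which each sample read is the difference between the current bit and the bit two positions earlier (a PR4-style model, chosen here purely for illustration), then pick, among all candidate bit sequences, the one whose predicted samples are closest to what was actually read. Real drives do this with Viterbi-style DSP algorithms rather than brute force:

```python
from itertools import product

def channel(bits):
    """PR4-style partial-response model: sample = bit[n] - bit[n-2]."""
    x = [0, 0] + list(bits)          # assume zeros before the sequence
    return [x[n] - x[n - 2] for n in range(2, len(x))]

def ml_detect(samples):
    """Brute-force maximum likelihood: the candidate sequence whose
    predicted samples have the smallest squared distance wins."""
    best, best_err = None, float("inf")
    for cand in product((0, 1), repeat=len(samples)):
        err = sum((s - p) ** 2 for s, p in zip(samples, channel(cand)))
        if err < best_err:
            best, best_err = list(cand), err
    return best

written = [1, 0, 1, 1, 0, 0]
noise = [0.1, -0.1, 0.05, -0.2, 0.1, 0.0]
noisy = [s + e for s, e in zip(channel(written), noise)]
print(ml_detect(noisy))   # recovers [1, 0, 1, 1, 0, 0] despite the noise
```

The point is that no single sample needs to be clean; the decision is made over the whole block, which is exactly what makes PRML robust at high densities.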


Transfer Rate
The transfer rate indicates the quantity of data that can be transferred to and from the disk. Obviously a disk with a high transfer rate will perform better than one with a low value.
Its value depends on many factors; it increases with the speed of rotation, with improvements in the encoding and error-correction methods, with the operating modes of the interface, and so on.
Specific tests exist to determine this factor, but they must be taken with due caution. Let us see why.
It has happened that a test performed on a new disk with a commercial benchmark gave a good result and then, repeated after some time, gave poorer results. How come?
The problem lies in the fact that the transfer of data from the platters is not constant, but depends on the position of the heads with respect to the edge of the disk. The peripheral zones have a greater extension and can hold more data, with a higher transfer speed than the inner ones.
This explains the mystery above: the first test was performed on an almost empty disk and used the outer, better-performing tracks; the later tests, done when the disk was fuller, moved the test area toward the inner, less performing zones (the ratio can even be 1 to 2!).
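The geometry behind this 1-to-2 ratio is easy to check: at constant rotation speed and constant linear bit density, the sustained transfer rate of a track is proportional to its circumference. With illustrative radii (hypothetical figures, not taken from any datasheet):

```python
from math import pi

rpm = 7200
bits_per_mm = 1000              # illustrative linear density, same on every track
r_outer, r_inner = 44.0, 22.0   # hypothetical track radii in mm

def track_rate(radius_mm):
    """Bits passing under the head per second at constant linear density."""
    circumference = 2 * pi * radius_mm
    return circumference * bits_per_mm * (rpm / 60)

print(track_rate(r_outer) / track_rate(r_inner))  # -> 2.0: outer tracks twice as fast
```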
The solution is to employ tests that take this problem into account (and, in general, to take test results "cum grano salis"...).
Incidentally, it should be remembered that file fragmentation is also a cause of poor test performance; before testing it is advisable to defragment the disk (defrag). In ordinary use, too, it is advisable to defragment the disk from time to time, both to improve performance and to reduce the possibility of error.


Write Precompensation
Write precompensation refers to the need to vary the current supplied to the heads while writing to the platters. Old disks used the same number of sectors per track, regardless of whether the track was an outer or an inner one. Clearly, on the inner tracks, which have a smaller circumference, the sectors must be physically smaller than those on the outer tracks, which have a larger one. In other words, the density of the magnetic domains is not uniform across the platter. This led to the need to somehow compensate for a difference that, beyond a certain track, would make re-reading very difficult; hence one of the parameters listed in the tables of old disks was the number of the track from which to start applying the compensation (the compensation itself being handled by the disk's own control system).
This parameter is no longer needed on modern disks, which use different read/write systems and are equipped with far more sophisticated controllers.


Interleave
Another term that comes up when speaking of disks is interleaving: it indicates the need for a vital "gap" between sectors.
The sectors of a disk are numbered in a logical sequence so that they can be addressed without problems during read and write operations. However, it is not guaranteed that logical sector 1 is physically placed before sector 2; this depends on the encoding scheme, the disk management system, and so on. In practice, if a logical sector and the next one were physically adjacent, they could create problems during reading! Why? Remember that the disk spins at high speed under the head: at the end of reading a sector, the system needs a certain amount of time to process the data just read, and this delay could let the next sector slip past, forcing a full extra revolution of the disk before it can be read. If, instead, the sectors are not adjacent, e.g. in the sequence 1 - 3 - 2, then after reading sector 1 the whole time taken by sector 3 to pass under the head remains available before the reading of sector 2 begins, leaving the circuitry time to prepare to retrieve the data of sector 2 without requiring another revolution of the disk.
Obviously the best case is an interleave of 1, where all logical and physical sectors coincide; but this is possible only if the read chain has no dead time at the end of a sector (e.g. by exploiting a cache). Otherwise performance drops dramatically.
Old disks typically had 17 sectors per track. With an interleave of 1:1 the physical and logical sequences coincide (i.e. 1-2-3-etc.). With an interleave of 2:1 the typical logical numbering of the physical sectors becomes 1, 10, 2, 11, 3, 12, 4, 13, 5, 14, 6, 15, 7, 16, 8, 17, 9. In this case a non-consecutive sector is inserted between each pair, and the logical layout is such that, if the controller needs the time of one passing sector to finish its work on the previous one, the throughput over the whole track is the best possible.
Depending on how it is built, each disk delivers its maximum performance with one specific interleave, and it is not necessarily 1:1. Common values ranged from 1:1 to 5:1. For this reason, disk-management utilities included, in the low-level formatting phase, a test (media analysis) to determine which interleave was the most appropriate.
On current disks the interleave parameter is no longer needed, nor is it accessible from outside; all the controls and formatting parameters of the disk are set by the manufacturer, and the user is not expected to manipulate them in any way.
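
As a small illustration, the 2:1 sequence above can be generated programmatically. The function below is only a sketch of the numbering scheme, not anything a real controller runs: it walks around the track placing each logical sector `factor` physical slots after the previous one, wrapping around and skipping slots already taken.

```python
def interleave_layout(sectors, factor):
    """Return the logical sector number stored in each physical slot
    of a track, for a given interleave factor (2 means 2:1)."""
    layout = [0] * sectors
    slot = 0
    for logical in range(1, sectors + 1):
        while layout[slot % sectors]:
            slot += 1            # slot already taken after wrap-around
        layout[slot % sectors] = logical
        slot += factor           # skip (factor - 1) slots between sectors
    return layout

print(interleave_layout(17, 1))  # 1:1 -> 1, 2, 3, ..., 17
print(interleave_layout(17, 2))  # 2:1 -> 1, 10, 2, 11, ..., 17, 9
```

Running it with 17 sectors and factor 2 reproduces exactly the 1, 10, 2, 11, ... sequence quoted above.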

The fact that some motherboards keep low-level disk-management utilities in the BIOS, or that tools with similar functions are available, does not mean they should be used; remember that IDE disks cannot be low-level formatted except with the manufacturer's specific tools, which are rarely available to end users.


The errors!
But can a disk make mistakes? Certainly; it is simply organized so that the system does not notice them.
The causes of error are manifold: modern technologies push the materials to their limits, and the high recording density, the imperfect uniformity of the magnetic media, the high rotational speed and the intense flow of data can all give rise to read errors. Electromagnetic interference, induced currents and thermal problems are further causes of error.

Since an error in the data saved on the disk would be unacceptable, all manufacturers strive to put every possible countermeasure in place. Obviously we are not talking here about errors due to a crash of the motor or the heads, or to damage to the control electronics: these put the disk partly or totally out of use, and appropriate mechanisms have been implemented to anticipate them (SMART).

The basic system for detecting errors and correcting them transparently is ECC (Error Correcting Code). Similar to the one implemented in RAM memories, and with analogous functions, it consists of algorithms, e.g. Reed-Solomon, that allow the control electronics to correct error conditions due to the misreading of one or more bits. The algorithms are usually based on the redundancy of the information and involve sophisticated routines. A sector typically contains 512 bytes, i.e. 4,096 bits; to these, further bits dedicated to the ECC are added. Their number depends on the algorithm used and on the design of the system; a compromise must be struck between the safety of the correction on one side and the loss of space and performance on the other. When a sector is written, the corresponding ECC bits are written along with it; when the sector is read back, the algorithm combines data and ECC and, if it detects an error, corrects it within the limits set by the designer. This operation, as said, is entirely transparent to the user, although on some disks of the most recent families the correction statistics are monitored by the control circuitry, both to activate internal remapping mechanisms (substitution of bad sectors with spare ones) and to warn of a possible serious failure of the disk in the future (SMART).
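
The Reed-Solomon codes used by real drives are too involved to show here; as a toy illustration of the same redundancy principle, the classic Hamming(7,4) code below stores 4 data bits in a 7-bit codeword and can correct any single flipped bit. It is a teaching example, not the code a disk controller actually uses.

```python
def hamming74_encode(nibble):
    """Encode a 4-bit value into a 7-bit Hamming codeword
    (bit order: p1 p2 d1 p3 d2 d3 d4)."""
    d = [(nibble >> i) & 1 for i in range(4)]
    p1 = d[0] ^ d[1] ^ d[3]      # covers codeword positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]      # covers positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]      # covers positions 4,5,6,7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_decode(code):
    """Recover the 4-bit value, correcting up to one flipped bit."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + (s2 << 1) + (s3 << 2)   # 0 = no error,
    if syndrome:                            # otherwise = error position
        c[syndrome - 1] ^= 1                # flip the bad bit back
    d = [c[2], c[4], c[5], c[6]]
    return sum(b << i for i, b in enumerate(d))
```

Flipping any one of the seven bits and decoding still yields the original nibble: that is exactly the "transparent correction" described above, only on a much smaller scale than a 512-byte sector protected by Reed-Solomon.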
If the error-correction systems cannot fully recover the data, the fault is then reported to the user.
A possible sequence of interventions is:

  • Error detection: the ECC procedure is applied to the data read from the sector and, if no errors are found, the data are sent to the interface and made available to the system.

  • Error correction: the ECC algorithm corrects the read error using the redundant information. An error corrected at this level is not really considered an error.

  • Read retry: if the ECC system could not correct the error because it exceeded its capabilities, the next step is a new attempt to read the sector. This can be handled automatically by the disk's control circuitry. An error is often caused by a transient magnetic disturbance or by other non-repetitive causes, and re-reading the magnetic zone allows the error to be corrected. In this case one speaks of "recovered" data, or of an error corrected after a retry.

  • Advanced error-correction procedures: many manufacturers implement correction procedures that involve more sophisticated algorithms and are usually able to correct the error. Why, then, not use them directly? Because the complexity of the procedure would slow down data transfer: acceptable for an occasional error, but a penalty on performance in normal operation. At times these procedures also involve the hardware, for instance replacing damaged sectors with others kept in reserve.

  • Uncorrectable error: if none of the preceding procedures manages to correct the error, the driver will report the fault to the system.
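
The ladder above can be sketched in a few lines. Everything here is a hypothetical simplification (the function names, the one-bit ECC limit, the retry count), not a real drive's firmware; it only shows the order in which the steps are tried.

```python
def read_with_retries(read_once, correct_ecc, max_retries=3):
    """Recovery ladder sketch: clean read -> transparent ECC fix ->
    re-reads -> give up. read_once() returns (data, bad_bits); the
    toy ECC here is assumed to fix at most one bad bit per sector."""
    for attempt in range(1 + max_retries):
        data, bad_bits = read_once()
        if bad_bits == 0:
            return data                  # clean read: pass data straight on
        if bad_bits == 1:
            return correct_ecc(data)     # transparent ECC correction
        # Too many errors for the ECC: retry, since many causes
        # (vibration, an induced current) are not repetitive.
    raise IOError("uncorrectable error: fault reported to the host")

# Simulate a sector that reads badly twice, then cleanly on the retry.
attempts = iter([(b"??", 5), (b"??", 5), (b"OK", 0)])
data = read_with_retries(lambda: next(attempts), lambda d: d)
print(data)  # b'OK' - "recovered" data after a retry
```

Only when every rung of the ladder fails does the `IOError` reach the host, which corresponds to the uncorrectable-error case in the last bullet.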

The bad sector map
In old hard disks, where the correspondence between physical and logical parameters was very tight and today's sophisticated technologies were absent, one of the typical elements was the presence of a bad sector list. The manufacturer usually shipped the disk with a test sheet listing the sectors found defective during testing, or the areas of the platters where plating defects made data writing unreliable; the user would then enter these parameters into the appropriate tables during the low-level formatting of the unit, so that the system could exclude the defective areas from the assignment of logical addresses.
Today this function, too, no longer belongs among the needs of modern hard disks, which are structured differently, with the logical mapping of the physical surfaces established during the production process and neither accessible nor modifiable by the user.



© CentroRecuperoDati® 1998 - 2005. All rights reserved.