Discussion:
Defragger and SSD defrag ?
NoonName
2018-05-08 19:32:10 UTC
1) what is the best defragger that will handle Win XP with HDD ?

2) does a laptop with a SSD ever need defragging ? When ?
R.Wieser
2018-05-08 20:01:27 UTC
NoonName
Post by NoonName
1) what is the best defragger that will handle Win XP with HDD ?
I don't know about the *best*, but XP carries its own program for it. Aptly
named "defrag.exe" (\windows\system32)
Post by NoonName
2) does a laptop with a SSD ever need defragging ? When ?
Although I'm far from an authority on this matter, SSDs randomize the actual
physical blocks the sectors are written to (to even out wear-and-tear over
the whole SSD memory). Defragging therefore should not mean anything to
such a drive (you would just move data from one unknown-placed block to
another equally unknown-placed block)*.

Besides, SSDs are (AFAIK) random access, and (again) should not benefit from
having sectors placed sequentially (where classical spinning disks certainly
would).

*I could imagine that the SSD recognises a sector-to-sector copy, and will
actually ignore the write (and just point the second block to the first. In
other words, de-duplicate).

Regards,
Rudy Wieser
JJ
2018-05-08 21:52:58 UTC
Post by R.Wieser
Besides, SSDs are (AFAIK) random access, and (again) should not benefit from
having sectors placed sequentially (where classical spinning disks certainly
would).
It actually has a benefit, albeit a micro or even nano one - depending on the
CPU and RAM speed. Non-fragmented files don't have the overhead of determining
the next cluster number in order to read data which spans into another fragment.

Imagine an ideal storage device which has virtually zero seek time and no
wear-and-tear, and it contains 2 files occupying the same number of
clusters. One file is not fragmented, and the other has 1 million or more
fragments. Obviously, the one which is fragmented would take longer to read
the whole file's data.
JJ
2018-05-08 21:57:57 UTC
Post by JJ
It actually has a benefit, albeit a micro or even nano one - depending on the
CPU and RAM speed. Non-fragmented files don't have the overhead of determining
the next cluster number in order to read data which spans into another fragment.
Imagine an ideal storage device which has virtually zero seek time and no
wear-and-tear, and it contains 2 files occupying the same number of
clusters. One file is not fragmented, and the other has 1 million or more
fragments. Obviously, the one which is fragmented would take longer to read
the whole file's data.
I'm not saying that SSDs should be defragged. It's just that defragging an
SSD has very little benefit, considering how sensitive an SSD is to
wear-and-tear. So defragging an SSD does more harm than good.
Good Guy
2018-05-08 22:02:53 UTC
Post by JJ
So defragging an SSD does more harm than good.
Very good news!!!!! You can sell more HDs and make more profit.

Please don't apologize for asking people to waste time defragging their
HDs. We love them if they do more of it.
--
With over 600 million devices now running Windows 10, customer
satisfaction is higher than any previous version of windows.
R.Wieser
2018-05-09 08:04:34 UTC
JJ,
Post by JJ
Obviously, the one which is fragmented would take longer to
read the whole file's data.
Not quite obviously I'm afraid. I think we may assume that an SSD is
random access. That means that the time between reading two sectors next
to each other does not take more time than reading two sectors far apart.
Post by JJ
... has zero seek time ... and it contains 2 files occupying the same
number of clusters. One file is not fragmented, and the other has 1
million or more fragments.
That depends: Are you requesting the sectors one-by-one, or are you doing a
bulk request ?

You see, in the first case any kind of SSD-induced delay will be rather
unnoticeable even in regard to the request itself - let alone in regard to
the amount of returned data.

In the second case you are cheating, as there AFAIK is no way for the
computer to request a non-sequential set of records. :-)

Besides that, what do you think is the chance that the SSD will try to
predict the next sector you will want to fetch and pre-cache it (into RAM,
because SSD storage memory is rather slow) ?

In short, while it's busy returning the requested sector it will also be busy
pre-resolving the most likely next requests - effectively reducing the delay
you're referring to to zero for whoever is looking at the returned sectors.

Regards,
Rudy Wieser
JJ
2018-05-09 12:18:42 UTC
Post by R.Wieser
JJ,
Post by JJ
Obviously, the one which is fragmented would take longer to
read the whole file's data.
Not quite obviously I'm afraid. I think we may assume that an SSD is
random access. That means that the time between reading two sectors next
to each other does not take more time than reading two sectors far apart.
Post by JJ
... has zero seek time ... and it contains 2 files occupying the same
number of clusters. One file is not fragmented, and the other has 1
million or more fragments.
That depends: Are you requesting the sectors one-by-one, or are you doing a
bulk request ?
You see, in the first case any kind of SSD-induced delay will be rather
unnoticeable even in regard to the request itself - let alone in regard to
the amount of returned data.
In the second case you are cheating, as there AFAIK is no way for the
computer to request a non-sequential set of records. :-)
Besides that, what do you think is the chance that the SSD will try to
predict the next sector you will want to fetch and pre-cache it (into RAM,
because SSD storage memory is rather slow) ?
In short, while it's busy returning the requested sector it will also be busy
pre-resolving the most likely next requests - effectively reducing the delay
you're referring to to zero for whoever is looking at the returned sectors.
Regards,
Rudy Wieser
I think you misunderstood. I'm not talking about the storage device, storage
controller, or bus speed here. I'm talking about the file system. If the
file is not fragmented, the system doesn't need to determine the next
cluster number when reading a file, e.g. for copying. I.e. when reading
data, it only needs to increase the cluster number by one each time. If
the file is fragmented, it needs to determine the next cluster number from
the cluster allocation list of the file (on NTFS) or from the FAT (otherwise).
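To make that concrete, here is a minimal sketch in Python (purely
illustrative - not how Windows actually reads anything; read_cluster and
the fat table below are made-up stand-ins for the real disk and
allocation structures):

def read_contiguous(read_cluster, start, count):
    # Contiguous file: the "next cluster" is always the current one plus one.
    data = bytearray()
    for n in range(start, start + count):
        data += read_cluster(n)
    return bytes(data)

def read_chained(read_cluster, fat, start):
    # Fragmented file: each next cluster has to be looked up in a table.
    data = bytearray()
    n = start
    while n != -1:                # -1 marks end-of-chain in this toy model
        data += read_cluster(n)
        n = fat[n]                # the extra lookup per cluster
    return bytes(data)

# Tiny self-test with an in-memory "disk" and a deliberately fragmented chain.
clusters = {n: bytes([n]) * 4096 for n in range(8)}
fat = {0: 3, 3: 1, 1: 7, 7: -1}
assert read_contiguous(clusters.get, 0, 4) == b"".join(clusters[n] for n in (0, 1, 2, 3))
assert read_chained(clusters.get, fat, 0) == b"".join(clusters[n] for n in (0, 3, 1, 7))

Both versions issue the same reads; the chained one just pays one extra
table lookup per cluster, which is the overhead being discussed.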
Wolf K
2018-05-09 13:52:13 UTC
Post by JJ
Post by R.Wieser
JJ,
Post by JJ
Obviously, the one which is fragmented would take longer to
read the whole file's data.
Not quite obviously I'm afraid. I think we may assume that an SSD is
random access. That means that the time between reading two sectors next
to each other does not take more time than reading two sectors far apart.
Post by JJ
... has zero seek time ... and it contains 2 files occupying the same
number of clusters. One file is not fragmented, and the other has 1
million or more fragments.
That depends: Are you requesting the sectors one-by-one, or are you doing a
bulk request ?
You see, in the first case any kind of SSD-induced delay will be rather
unnoticeable even in regard to the request itself - let alone in regard to
the amount of returned data.
In the second case you are cheating, as there AFAIK is no way for the
computer to request a non-sequential set of records. :-)
Besides that, what do you think is the chance that the SSD will try to
predict the next sector you will want to fetch and pre-cache it (into RAM,
because SSD storage memory is rather slow) ?
In short, while it's busy returning the requested sector it will also be busy
pre-resolving the most likely next requests - effectively reducing the delay
you're referring to to zero for whoever is looking at the returned sectors.
Regards,
Rudy Wieser
I think you misunderstood. I'm not talking about the storage device, storage
controller, or bus speed here. I'm talking about the file system. If the
file is not fragmented, the system doesn't need to determine the next
cluster number when reading a file, e.g. for copying. I.e. when reading
data, it only needs to increase the cluster number by one each time. If
the file is fragmented, it needs to determine the next cluster number from
the cluster allocation list of the file (on NTFS) or from the FAT (otherwise).
I don't think that's how it works. AFAIK, every cluster in a file points
to the next one, and the last one is marked as such. Storing cluster
numbers separately is inefficient, especially for large files. (E.g., a
1 GB file on a plain vanilla NTFS partition occupies 250,000 clusters).

Best,
--
Wolf K
kirkwood40.blogspot.com
"The next conference for the time travel design team will be held two
weeks ago."
R.Wieser
2018-05-09 17:06:38 UTC
JJ,
If the file is not fragmented, the system doesn't need to determine
the next cluster number when reading a file, e.g. for copying. I.e.
when reading data, it only needs to increase the cluster number
by one each time
And in the other case it takes the current sector number and uses it as an
index into a look-up table. I don't think you will notice the difference.
Not even when retrieving a million sectors.

<strikethru>
But yes, when you define the time consumption of everything else as
being zero - meaning you do not allow something as common as a second
thread/background process - and jack up the amount of sectors that you are
going to retrieve, I guess you could get an actually measurable time
consumption somewhere along the line ...

... but it would be devoid of any meaning.
</strikethru>



Hold the presses:
I thought that a MOV loading a register from a table would cost at least
double the cycles of an INC (on an x86), but some googling seems to show
they cost the same amount ...

So, the answer is: No difference.

(but I left my original answer there as "strikethru").
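A back-of-the-envelope check says the same thing. The numbers below are
guesses (the per-lookup cost and SSD throughput are assumptions, not
measurements), but even generous guesses leave the chain-walking overhead
buried under the time spent actually moving the data:

fragments     = 1_000_000        # JJ's worst case
cluster_bytes = 4096             # a typical NTFS cluster size
lookup_ns     = 10               # guess: one extra table lookup in cached RAM
ssd_bytes_s   = 500_000_000      # guess: ~500 MB/s for a SATA SSD

chain_overhead_s = fragments * lookup_ns / 1e9
transfer_s       = fragments * cluster_bytes / ssd_bytes_s

print(f"extra next-cluster lookups: ~{chain_overhead_s:.2f} s")   # ~0.01 s
print(f"actually moving the data:   ~{transfer_s:.1f} s")         # ~8.2 s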

Regards,
Rudy Wieser
Wolf K
2018-05-09 13:39:20 UTC
On 2018-05-09 04:04, R.Wieser wrote:
[...]
Post by R.Wieser
In the second case you are cheating, as there AFAIK is no way for the
computer to request a non-sequential set of records.:-)
[...]

The PET/VIC-20/Commodore-64 did. In fact, to read/write you had to
specify whether the data file was serial or random access. I have no
idea how the disk drive handled these differences. The disk drive was a
smart device, seen as destination and source of data by the OS, not as
resource to be managed.

Best,
--
Wolf K
kirkwood40.blogspot.com
"The next conference for the time travel design team will be held two
weeks ago."
Mark Lloyd
2018-05-09 16:26:57 UTC
Post by Wolf K
[...]
Post by R.Wieser
In the second case you are cheating, as there AFAIK is no way for the
computer to request a non-sequential set of records.:-)
[...]
The PET/VIC-20/Commodore-64 did. In fact, to read/write you had to
specify whether the data file was serial or random access. I have no
idea how the disk drive handled these differences. The disk drive was a
smart device, seen as destination and source of data by the OS, not as
resource to be managed.
Best,
IIRC, random files (not supported by BASIC except in the C128 and I
think some later PETs) used additional sectors (called "side sectors"?)
to store pointers to the actual data. File types were "P" (used by
save/load), "S", "U", and "L". All except the last were actually the
same type. I don't remember why it was called "L", but to open one you
had to specify a record length (according to the C128 documentation).
--
Mark Lloyd
http://notstupid.us/

"Put your trust in Allah, but tie up your camel first." -- Arab proverb
R.Wieser
2018-05-09 17:20:20 UTC
Wolf,
I have no idea how the disk drive handled these differences.
I know that my C64 "breadbox" drive stored the next sector number in the first
two bytes of the current one. Hence it also returned just 254 bytes per sector.

When doing a sequential read it could therefore go and retrieve the next
sector while waiting for the "current sector OK, give me the next" signal.
Something which of course wasn't possible when doing random access.
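For what it's worth, following such a chain is only a few lines. This is
just a sketch: read_block(track, sector) is a hypothetical helper that
returns the raw 256 bytes of a block, and I'm going from memory on the
end-of-file convention (a link track of 0 meaning the second byte holds
the index of the last used byte):

def read_file(read_block, track, sector):
    # Follow a Commodore-style chain: bytes 0-1 of each 256-byte block link
    # to the next track/sector, the remaining 254 bytes are file data.
    data = bytearray()
    while track != 0:
        block = read_block(track, sector)
        track, sector = block[0], block[1]
        if track != 0:
            data += block[2:256]          # full block: 254 data bytes
        else:
            data += block[2:sector + 1]   # last block: 'sector' = last used index
    return bytes(data)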

The only really "major" thing I did with that drive was to get it to emulate
subdirectories, I felt like quite something that it wanted to work for me.
:-)
The disk drive was a smart device, seen as destination and source of data
by the OS, not as resource to be managed.
Yup. And with the right instruction you could perform a drive-to-drive
copy, leaving your 'puter free for other stuff. Not that you could do much
without a drive, but that's a whole other problem.

Regards,
Rudy Wieser
Mark Lloyd
2018-05-10 14:26:34 UTC
On 05/09/2018 12:20 PM, R.Wieser wrote:

[snip]
Post by R.Wieser
Yup. And with the right instruction you could perform a drive-to-drive
copy, leaving your 'puter free for other stuff. Not that you could do much
without a drive, but that's a whole other problem.
Regards,
Rudy Wieser
IIRC, for a dual drive you can use the command "D1=0".

Once at a user group meeting I saw someone with a program that would
allow multiple copies without connecting a computer (just once to load
that program). You put the disk to copy in drive 0 and a blank disk in
drive 1. It starts copying automatically. Drive lights show when it's done.

You could even copy between units (including drive to printer) leaving
the computer free, although the I/O bus would be unavailable to it.
--
Mark Lloyd
http://notstupid.us/

Why be born again, when you can just grow up?
R.Wieser
2018-05-10 19:01:15 UTC
Mark,
Post by Mark Lloyd
IIRC, for a dual drive you can use the command "D1=0".
In all my time with the C64 I've only seen a double-drive configuration a
few times. I did fork over the money for that "breadbox" C64 drive though,
because I got fed up rather fast with the cassettes (you always had to
verify the program - on a medium that was already slow - to be sure it would
"stick". I learned that the hard way. :-( )
Post by Mark Lloyd
Once at a user group meeting I saw someone with a program that would allow
multiple copies without connecting a computer (just once to load that
program)
As you could upload-and-execute programs onto the drives themselves (which
is what I did to get those "subdirectories" I spoke of earlier) I can easily
imagine that.
Post by Mark Lloyd
You could even copy between units ...
That's the only way I saw it done.
Post by Mark Lloyd
... leaving the computer free, although the I/O bus would be unavailable
to it.
I once or twice considered throwing something together that would
effectively create two separate busses (a couple of 74xx open-collector
driver chips would have done it), but as I never had the pleasure of having
more than one device for that bus I had no reason to build it. Oh well.

Regards,
Rudy Wieser
Mark Lloyd
2018-05-11 15:47:16 UTC
Post by R.Wieser
Mark,
Post by Mark Lloyd
IIRC, for a dual drive you can use the command "D1=0".
In all my time with the C64 I've only seen a double-drive configuration a
few times. I did fork over the money for that "breadbox" C64 drive though,
because I got fed up rather fast with the cassettes (you always had to
verify the program - on a medium that was already slow - to be sure it would
"stick". I learned that the hard way. :-( )
I had (and, actually, still have) a MSD SD-2 dual drive. It would often
fail because of the connector on the controller where the transformer
secondary was connected. I finally fixed it (where the repair shop
always failed), but by then I wasn't using the C64 much.

[snip]
--
Mark Lloyd
http://notstupid.us/

"The belief in a supernatural source of evil is not necessary; men alone
are quite capable of every wickedness" -- Joseph Conrad
Paul
2018-05-08 20:41:49 UTC
Post by NoonName
1) what is the best defragger that will handle Win XP with HDD ?
2) does a laptop with a SSD ever need defragging ? When ?
I don't know why this question is cross-posted to the Win7
group, as you're not asking about Windows 7.

1) The built-in WinXP defragmenter is pretty good. It packs to
the left, and tries not to leave gaps. I've seen worse
defragmenters. And nobody wants to pay $39.95 for some
piece of crap they can't transfer to a second computer.

2) This was answered by a Microsoft employee. Normally an
SSD does not need to be defragmented, because it has
zero seek time. But there is at least one corner case,
where it *might* need to be defragmented.

https://www.hanselman.com/blog/TheRealAndCompleteStoryDoesWindowsDefragmentYourSSD.aspx

"... Storage Optimizer will defrag an SSD once a month
if volume snapshots are enabled. This is by design and
necessary due to slow volsnap copy on write performance...
"

When the file system takes shadow copies into consideration,
the performance of the file system can be degraded with time,
unless the Storage Optimizer "does something" :-) If you don't
use shadow copies, then it should not need to do anything.

WinXP isn't likely to know what an SSD is, so don't
defragment an SSD on purpose there. Later OSes, the ones
that prepare partitions on megabyte boundaries (Vista+)
are more likely to have some logic to identify an
SSD and behave responsibly towards it. WinXP is
too old to handle such a situation well.

As well, if you clone WinXP from a HDD to a new SSD,
you should "re-align" the partition. This improves
performance by putting clusters on flash block
boundaries. It's a bit easier on the drive. I
don't use SSD for WinXP, and wouldn't even think
of doing that. To me, Win10 absolutely needs SSD
for the boot drive, because Win10 is such a maintenance
pig (it's scanning, scanning, scanning all the time).
Save your SSD for the OS that really needs it.

HTH,
Paul
NY
2018-05-09 16:01:47 UTC
Post by Paul
Post by NoonName
1) what is the best defragger that will handle Win XP with HDD ?
2) does a laptop with a SSD ever need defragging ? When ?
I don't know why this question is cross-posted to the Win7
group, as you're not asking about Windows 7.
1) The built-in WinXP defragmenter is pretty good. It packs to
the left, and tries not to leave gaps. I've seen worse
defragmenters. And nobody wants to pay $39.95 for some
piece of crap they can't transfer to a second computer.
Assuming it works on XP (and I've not tried it), Piriform's Defraggler is a
good alternative to the built-in defragger for XP. It has the advantage that
you can choose whether to:

1. defrag the whole drive, eliminating any gaps

2. just defrag the fragmented files, leaving gaps

3. defrag the free space which eliminates most but not all gaps

I tend to do 2 then 3 which leaves as much contiguous free space as possible
at the end of the drive, without taking as long as 1. As long as there is
plenty of contiguous space, you can probably get by with just doing 2.

The XP defragger only does the whole lot - the equivalent of 1 - so it is
slow.
David E. Ross
2018-05-08 20:50:21 UTC
Post by NoonName
1) what is the best defragger that will handle Win XP with HDD ?
2) does a laptop with a SSD ever need defragging ? When ?
Two actions are really meaningless for SSDs.

As Wieser describes, defragging an SSD will not accomplish anything. By
writing unnecessarily to the SSD, defragging can actually shorten the
useful life of an SSD.

The other action that is meaningless is erasing files. The writing
needed to erase a file might fail to over-write that file. While the
pointer to the file might be erased, the file contents remain untouched.
Only a total erasure of the entire device could have any meaning.
Details about this are at <http://eraser.heidi.ie/>.
--
David E. Ross
<http://www.rossde.com/>

First you say you do, and then you don't.
And then you say you will, but then won't.
You're undecided now, so what're you goin' to do?
From a 1950s song
That should be Donald Trump's theme song. He obviously
does not understand "commitment", whether it is about
policy or marriage.
NY
2018-05-09 15:34:14 UTC
Post by David E. Ross
Two actions are really meaningless for SSDs.
The other action that is meaningless is erasing files. The writing
needed to erase a file might fail to over-write that file. While the
pointer to the file might be erased, the file contents remain untouched.
Only a total erasure of the entire device could have any meaning.
Details about this are at <http://eraser.heidi.ie/>.
As with any storage device (whether SSD or HDD), when you erase or overwrite
a file, you are not deleting the contents at that time; instead you are
returning the "sectors" (to use HDD terminology) to a pool which can be used
for a new/updated file at some time in the future.

It still makes sense to erase files that are no longer needed, so as to free
up space and for general housekeeping. But unless you overwrite all the
unused sectors that are not allocated to files/folders (or erase the whole
device, as you say), then there is the possibility that someone may be able
to undelete the file - that applies to HDD as much as to SSD.


So I'd say that defragging an SSD doesn't make sense, but erasing a file
makes as much or as little sense for both HDD and SSD.
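A toy model of that difference (nothing to do with any real file system;
the dictionary stands in for the allocation table and the bytearray for
the raw device):

SECTOR = 16
disk   = bytearray(SECTOR * 8)            # the raw "device"
alloc  = {}                               # filename -> list of sector numbers

def write_file(name, sectors, payload):
    alloc[name] = sectors
    for i, s in enumerate(sectors):
        disk[s*SECTOR:(s+1)*SECTOR] = payload[i*SECTOR:(i+1)*SECTOR].ljust(SECTOR, b"\0")

def delete_file(name):                    # ordinary delete: only the bookkeeping goes
    del alloc[name]

def erase_file(name):                     # "erase": overwrite the data first
    for s in alloc[name]:
        disk[s*SECTOR:(s+1)*SECTOR] = b"\0" * SECTOR
    del alloc[name]

write_file("secret.txt", [2, 3], b"top secret, do not undelete me!!")
delete_file("secret.txt")
print(bytes(disk[2*SECTOR:4*SECTOR]))     # contents still readable -> undeletable

An undelete tool is essentially something that rebuilds the alloc entry (or
scans the raw device) while the data is still sitting there.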
Wolf K
2018-05-09 15:44:29 UTC
Post by NY
Post by David E. Ross
Two actions are really meaningless for SSDs.
The other action that is meaningless is erasing files.  The writing
needed to erase a file might fail to over-write that file.  While the
pointer to the file might be erased, the file contents remain untouched.
Only a total erasure of the entire device could have any meaning.
Details about this are at <http://eraser.heidi.ie/>.
As with any storage device (whether SSD or HDD), when you erase or overwrite
a file, you are not deleting the contents at that time; instead you are
returning the "sectors" (to use HDD terminology) to a pool which can be used
for a new/updated file at some time in the future.
It still makes sense to erase files that are no longer needed, so as to free
up space and for general housekeeping. But unless you overwrite all the
unused sectors that are not allocated to files/folders (or erase the whole
device, as you say), then there is the possibility that someone may be able
to undelete the file - that applies to HDD as much as to SSD.
So I'd say that defragging an SSD doesn't make sense, but erasing a file
makes as much or as little sense for both HDD and SSD.
Semantics alert:

"Erase" = "Overwrite data"
"Delete" = "Mark filename as Deleted, and mark sectors/clusters as
Available"

Best,
--
Wolf K
kirkwood40.blogspot.com
"The next conference for the time travel design team will be held two
weeks ago."
Paul
2018-05-09 18:20:32 UTC
Post by Wolf K
Post by NY
Post by David E. Ross
Two actions are really meaningless for SSDs.
The other action that is meaningless is erasing files. The writing
needed to erase a file might fail to over-write that file. While the
pointer to the file might be erased, the file contents remain untouched.
Only a total erasure of the entire device could have any meaning.
Details about this are at <http://eraser.heidi.ie/>.
As with any storage device (whether SSD or HDD), when you erase or overwrite
a file, you are not deleting the contents at that time; instead you are
returning the "sectors" (to use HDD terminology) to a pool which can be used
for a new/updated file at some time in the future.
It still makes sense to erase files that are no longer needed, so as to free
up space and for general housekeeping. But unless you overwrite all the
unused sectors that are not allocated to files/folders (or erase the whole
device, as you say), then there is the possibility that someone may be able
to undelete the file - that applies to HDD as much as to SSD.
So I'd say that defragging an SSD doesn't make sense, but erasing a
file makes as much or as little sense for both HDD and SSD.
"Erase" = "Overwrite data"
"Delete" = "Mark filename as Deleted, and mark sectors/clusters as
Available"
Best,
You can do both if you want.

You can do a defragmenter run first, followed by a run of
Sysinternals SDELETE with the -z option to zero free space.
Then there will be nothing for Recuva to find, and the file
system will be in a "maximally recoverable" state in the
event the partition header got erased or something. Scavenger
file recovery programs work best if the files were
defragmented before the accident happened.

I don't think anyone has that much of a disk fetish though.

Paul
David E. Ross
2018-05-09 17:45:02 UTC
Post by David E. Ross
As Wieser describes, defragging an SSD will not accomplish anything. By
writing unnecessarily to the SSD, defragging can actually shorten the
useful life of an SSD.
Do SSDs require defragging?
No, it is not necessary or recommended to defrag an SSD. Since there
are no physical disks, there is no need to organize the data in
order to reduce seek time. SSDs have TRIM, which serves the same
basic function to make your drive faster without subjecting the drive
to the extra workload. Defragging an SSD will put undue wear and tear
on the drive and may actually shorten its life.
--
David E. Ross
<http://www.rossde.com/>

First you say you do, and then you don't.
And then you say you will, but then won't.
You're undecided now, so what're you goin' to do?
From a 1950s song
That should be Donald Trump's theme song. He obviously
does not understand "commitment", whether it is about
policy or marriage.
Diesel
2018-05-16 22:51:14 UTC
Post by David E. Ross
Post by NoonName
1) what is the best defragger that will handle Win XP with HDD ?
2) does a laptop with a SSD ever need defragging ? When ?
Two actions are really meaningless for SSDs.
As Wieser describes, defragging an SSD will not accomplish
anything. By writing unnecessarily to the SSD, defragging can
actually shorten the useful life of an SSD.
The other action that is meaningless is erasing files. The
writing needed to erase a file might fail to over-write that file.
While the pointer to the file might be erased, the file contents
remain untouched.
That depends on the way in which you opted to delete the file. Using
a secure file wiping utility (if properly written and implemented)
will erase the file contents.
Post by David E. Ross
Only a total erasure of the entire device could have any meaning.
Details about this are at <http://eraser.heidi.ie/>.
It's a site for a secure disk wiping utility. One of many which all
do the same thing. It's not necessary to wipe the entire disk out to
whack selected file(s).
--
To prevent yourself from being a victim of cyber
stalking, it's highly recommended you visit here:
https://tekrider.net/pages/david-brooks-stalker.php
===================================================
Cats must try to kill the curlicues of ribbon on the finished
packages.
VanguardLH
2018-05-09 05:58:12 UTC
Post by NoonName
1) what is the best defragger that will handle Win XP with HDD ?
^^^^^^
You cross-posted in the wrong newsgroup (Windows 7)

If you choose a 3rd-party defragmenter (e.g., Piriform Defraggler), you
need to disable the boot-time and idle-time execution of Windows' own
defrag tool. Defragging with one tool and then with another means they
will keep battling on what they consider the best layout. No defraggers
agree on what is the best layout. You run one and it chooses its layout,
then you run another and it changes that layout to what it likes best.
You end up defragging an already defragged drive because the two, or
more, defraggers keep competing with each other. As I recall (I do not
have a Win XP host to check), the idle-time defrag gets added as a
scheduled event in Task Scheduler. The boot-time defrag must be
disabled in the registry. You don't want to be using Windows' defrag
and also some 3rd party defrag only to have them keep undoing what the
other did.

https://tweaks.com/windows/37055/enable-or-disable-boot-defrag/

So what's wrong with using the defrag already included in Windows? The
other layouts preferred by other defraggers are just their arbitrary
choice based on their opinion of what they like, and may not be the best
layout for your scenario. While I used Defraggler for a while (and keep
it installed as an alternative although I haven't used it in years), I
just used the one that comes with Windows. Only if you have some very
special needs, like moving huge, rarely-modified files to the "end"
(inner, slower cylinders) of the disk, does some 3rd-party layout make
sense; however, those files probably shouldn't be wasting space on your
OS+app partition anyway and should be in their own "data" partition.

You don't defrag SSDs. There is no advantage but there is one big
disadvantage: accelerated wear on the SSD. Due to oxide stress, there
are a limited number of writes that an SSD can sustain. It uses various
methods, like wear-levelling, in trying to prevent one block of flash
from getting a huge number of writes, like rewriting the same file over
and over.
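The idea in a toy sketch (hypothetical, and far simpler than real SSD
firmware): every write of a logical block gets steered to the least-worn
free physical block, so hammering the same logical block still spreads the
erases across the whole flash:

FLASH_BLOCKS = 8
erase_count  = [0] * FLASH_BLOCKS         # wear per physical block
mapping      = {}                         # logical block -> physical block
free_blocks  = set(range(FLASH_BLOCKS))
flash        = [None] * FLASH_BLOCKS

def write_logical(lba, data):
    phys = min(free_blocks, key=lambda b: erase_count[b])   # least-worn free block
    free_blocks.remove(phys)
    if lba in mapping:                    # the old copy goes stale and gets "erased"
        old = mapping[lba]
        erase_count[old] += 1
        free_blocks.add(old)
    mapping[lba] = phys
    flash[phys] = data

for i in range(100):                      # rewrite the same logical block 100 times
    write_logical(0, f"version {i}")
print(erase_count)                        # the erases end up spread over many blocks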
Post by NoonName
2) does a laptop with a SSD ever need defragging ? When ?
Is Windows XP or 7 running on that laptop? Windows XP does not support
automated TRIM but Windows 7 does. Have you actually bought an SSD for
your laptop yet? If so, make sure it comes with a utility that lets
you run TRIM on the SSD.

https://en.wikipedia.org/wiki/Trim_(computing)

You might be able to schedule the utility to run TRIM; however, many
such included tools do not have a CLI (command-line interface), so
you'll have to set a reminder for you to periodically (perhaps monthly)
run the utility to exercise its TRIM function.

I don't know the prevalence but due to the lack of TRIM in Windows XP
and other operating systems, and because TRIM will pend until the OS
considers the device as sufficiently idle, SSD drives have their own
in-built GC (garbage collection) to do the TRIM on their own (if idle
long enough). Either the OS can issue an ATA command to tell the drive
to start a TRIM operation or the SSD itself using its firmware can
decide to perform a TRIM operation. However, what I've seen with
firmware-based TRIM is that it is slow to act. That is, it doesn't run
too often. Your SSD will get slower until you leave it idle (which
means leaving your computer powered up and NOT having it go into standby
or hibernate mode, so the drive remains up) long enough for the drive
itself to decide it is time for GC.

https://arstechnica.com/gadgets/2015/04/ask-ars-my-ssd-does-garbage-collection-so-i-dont-need-trim-right/

So are you sticking with Windows XP on that laptop, or did you cross-post
to the Windows 7 newsgroup because you are contemplating upgrading to
Windows 7? I prefer doing fresh installs of a new OS version rather
than upgrading and lugging along the pollution from the old setup, but some
folks want easy and quick (which is why fast-food restaurants thrive).
NoonName
2018-05-09 17:00:02 UTC
Posted in Win 7 because there is more activity there/here and most of
you in the Win 7 group have or once had Win XP so are very knowledgeable
regarding Win XP.
Win XP group gets little attention.

Read my original post.

Defragger for HDD ! Answered: Piriform.

Would defrag help for SSD ? Answered: minimally, and possibly too
much wear.

You all must be non native Martian speakers.

So, an SSD's degree of success depends more on the PC chip set !
I have several Win XP Pro laptops that have Samsung SSDs installed.
Samsung Magician tests and sets them up.
It also identifies the capabilities of the SSD depending on the
laptop's chip set.
The same SSD will run much faster with "better" chip sets.
These laptops are all from the same manufacturer, Fujitsu.
No way of telling without just trying.
In any case, all laptops with the same model Samsung SSD run much better,
faster and are reliable. (Plug for Samsung SSD)
If interested, get the Samsung with the lifetime warranty, by paying a
little more. One package includes a cable to do the HDD to SSD transfer.

I am not in any way affiliated with Samsung, just a very happy Samsung
SSD owner (installed in three laptops).
Good Guy
2018-05-09 17:16:58 UTC
Post by NoonName
Win XP group gets little attention.
Best thing is to avoid XP newsgroup completely; It is dead.
Kaput!!!!!!!!!!!

Nobody in their right mind should be using XP; If they are so fond of
their old machines then they should install that crap called Linux. XP
is not supported and it gets no monthly updates.

From: NoonName <***@NoonPlace.com>
Newsgroups: microsoft.public.windowsxp.general,alt.windows7.general
Subject: Re: Defragger and SSD defrag ?
Date: Wed, 9 May 2018 10:00:02 -0700
Organization: Netfront http://www.netfront.net/
Message-ID: <pcv9h1$25q$***@adenine.netfront.net>
References: <pcstvi$1hva$***@adenine.netfront.net>
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Wed, 9 May 2018 17:01:21 -0000 (UTC)
Injection-Info: adenine.netfront.net; posting-host="45.26.37.121";
logging-data="2234"; mail-complaints-to="***@netfront.net"
User-Agent: Mozilla/5.0 (Windows NT 5.1; rv:49.0) Gecko/20100101 Firefox/49.0
SeaMonkey/2.46
--
With over 600 million devices now running Windows 10, customer
satisfaction is higher than any previous version of windows.
NY
2018-05-09 21:07:26 UTC
Post by Good Guy
Post by NoonName
Win XP group gets little attention.
Best thing is to avoid XP newsgroup completely; It is dead.
Kaput!!!!!!!!!!!
Nobody in their right mind should be using XP; If they are so fond of
their old machines then they should install that crap called Linux. XP is
not supported and it gets no monthly updates.
I still have an XP PC which is used solely for digitising analogue
videotapes to MPG, because its capture card (which isn't supported by later
versions of Windows) gives much better results than more modern USB
adaptors. But I don't connect it to the internet and I only transfer data
(MPG files) via memory stick, so the chances of it becoming infected are
infinitesimal.

Otherwise, yes, XP is too risky nowadays.
Ant
2018-05-10 00:14:26 UTC
Post by NY
I still have an XP PC which is used solely for digitising analogue
videotapes to MPG, because its capture card (which isn't supported by later
versions of Windows) gives much better results than more modern USB
adaptors. But I don't connect it to the internet and I only transfer data
(MPG files) via memory stick, so the chances of it becoming infected are
infinitesimal.
Otherwise, yes, XP is too risky nowadays.
It's fine for offline usage. :)
--
Quote of the Week: "Cheerios: Hula-hoops for ants." --unknown
Note: A fixed width font (Courier, Monospace, etc.) is required to see this signature correctly.
/\___/\ Ant(Dude) @ http://antfarm.home.dhs.org
/ /\ /\ \ Please nuke ANT if replying by e-mail privately. If credit-
| |o o| | ing, then please kindly use Ant nickname and URL/link.
\ _ /
( )
Paul
2018-05-09 18:34:23 UTC
Post by NoonName
Posted in Win 7 because there is more activity there/here and most of
you in the Win 7 group have or once had Win XP so are very knowledgeable
regarding Win XP.
Win XP group gets little attention.
Read my original post.
Defragger for HDD ! Answered: Piriform.
Would defrag help for SSD ? Answered: minimally, and possibly too
much wear.
You all must be non native Martian speakers.
So, an SSD's degree of success depends more on the PC chip set !
I have several Win XP Pro laptops that have Samsung SSDs installed.
Samsung Magician tests and sets them up.
It also identifies the capabilities of the SSD depending on the
laptop's chip set.
The same SSD will run much faster with "better" chip sets.
These laptops are all from the same manufacturer, Fujitsu.
No way of telling without just trying.
In any case, all laptops with the same model Samsung SSD run much better,
faster and are reliable. (Plug for Samsung SSD)
If interested, get the Samsung with the lifetime warranty, by paying a
little more. One package includes a cable to do the HDD to SSD transfer.
I am not in any way affiliated with Samsung, just a very happy Samsung
SSD owner (installed in three laptops).
Did you align the partition on it ?

You might get some idea by using PTEDIT32 and looking
at the numbers involved. If a lot of the numbers on the
right are divisible by 63, then you're probably not aligned
optimally for WinXP.
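For a quick sanity check of whatever start-LBA numbers PTEDIT32 shows,
something like this does the arithmetic (a sketch; it assumes 512-byte
sectors, and the 2048-sector / 1 MiB boundary is simply the convention
Vista and later use):

def check_alignment(start_lba):
    # start_lba = the partition's starting sector, assuming 512-byte sectors.
    if start_lba % 2048 == 0:
        return "1 MiB aligned (Vista+ style) - fine for SSDs"
    if start_lba % 8 == 0:
        return "4 KiB aligned - clusters at least won't straddle 4K pages"
    return "unaligned (e.g. the classic XP start at sector 63) - worth re-aligning"

for lba in (63, 2048, 16065):
    print(lba, "->", check_alignment(lba))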

A way to align for free was to use Macrium Reflect Free
during cloning, which has an align choice box during the clone.

(The seventh frame in this filmstrip shows the alignment dialog)

https://postimg.cc/image/soq5qlgrx/

Aligning is even useful on 512e drives being used on WinXP.
Lucky for me, the last hard drive I got for WinXP was
a 512n drive. If you need a hard drive today for WinXP,
I recommend a 2TB drive from the WD Gold series, as they're
the last 512n I know of.

Paul