Discussion:
About to format the whole laptop, need some partitioning advice.
Anubhav Yadav
2014-02-07 05:01:27 UTC
> Defining "desktop" is the tricky bit (to some it only means where the
> box sits). In this instance I've assumed the OP means office apps, bit
> of gaming, internet apps - so I'd go go for a two slice setup, with a
> separate / and /home, with and a swap file. For a similar "desktop" in a
> business environment it'd be SOE rules so I would use more partitions
> (backup and rollout planning would be a nightmare otherwise - not to
> mention support contract negotiation headaches).
>

Hi, I would like to set up an environment for development and programming.
So I guess I will have to separate out /usr from / and keep /home on a
different partition too.

--
Regards,
Anubhav Yadav
Imperial College of Engineering and Research,
Pune.
Anubhav Yadav
2014-02-07 05:08:36 UTC
> Simply that, if you intend to adopt i3, you will have to learn to think
> differently. My opinion is that tiling WMs are far more efficient than
> classic stacking window managers, but it did change my habits. Since
> then, for example, I do not use any file explorer; they are slower than
> the command line for most things. Of course, you can still use file
> explorers...
>
So do you only use the command line to navigate files? Or do you use a
command-line file explorer?

> Now, i3 is the one I chose because it did not imply a lot of learning;
> its configuration file is really clean: no need to learn any programming
> language there. But the fact is that it lacks some features compared to
> more hard-core tiling WMs; for example, some others have layouts: new
> windows do not just split the current container, they are placed in a
> precise spot on the screen.
> But anyway, here is a quote from i3-wm.org: "i3 is primarily targeted at
> advanced users and developers." GNOME users might not feel at home there.
> It provides only a window manager: no menu, no desktop, etc. You will have
> to install those yourself.

All I need is a good network manager, and a good notifier so that I get
notified of xchat mentions and plugged-in USB devices.

Now, I am also a programming student, so learning a good language for a twm
shouldn't be overkill.

I face a question now:
1) Should I take time to learn a new twm, or should I install both twm and xfce?

2) i3 vs awesome! Just installed i3; let's see how it fares against awesome.

--
Regards,
Anubhav Yadav
Imperial College of Engineering and Research,
Pune.
b***@neutralite.org
2014-02-07 13:32:51 UTC
On 07.02.2014 06:08, Anubhav Yadav wrote:
>> Simply that, if you intend to adopt i3, you will have to learn to
>> think differently. My opinion is that tiling WMs are far more
>> efficient than classic stacking window managers, but it did change my
>> habits. Since then, for example, I do not use any file explorer; they
>> are slower than the command line for most things. Of course, you can
>> still use file explorers...
>>
> So do you only use the command line to navigate files? Or do you use a
> command-line file explorer?

I tried mc. I did not like it, so I only use bash.

>> Now, i3 is the one I chose because it did not imply a lot of
>> learning; its configuration file is really clean: no need to learn
>> any programming language there. But the fact is that it lacks some
>> features compared to more hard-core tiling WMs; for example, some
>> others have layouts: new windows do not just split the current
>> container, they are placed in a precise spot on the screen.
>> But anyway, here is a quote from i3-wm.org: "i3 is primarily
>> targeted at advanced users and developers." GNOME users might not
>> feel at home there.
>> It provides only a window manager: no menu, no desktop, etc. You
>> will have to install those yourself.
>
> All I need is a good network manager, and a good notifier so that I
> get notified of xchat mentions and plugged-in USB devices.

Wicd works like a charm, but I only use it when I need to connect to an
unprotected wifi network, because I do not know how to configure those
networks by hand :)
As for notifications, i3 can notify you when a window asks for
attention, so no problem for xchat, I guess. For USB notifications, I do
not know; I never really minded. I mount storage myself, and for other
hardware, udev does what it has to do on its own.
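
(If it helps: mounting by hand is just something like the following, as
root. A sketch only; the device name is a guess, check dmesg for the
real one.)

    mount /dev/sdb1 /mnt    # device name hypothetical
    # ... work with the files ...
    umount /mnt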

> Now, I am also a programming student, so learning a good language for
> a twm shouldn't be overkill.
>
> I face a question now:
> 1) Should I take time to learn a new twm, or should I install both twm
> and xfce?
>
> 2) i3 vs awesome! Just installed i3; let's see how it fares against
> awesome.
>
> --
> Regards,
> Anubhav Yadav
> Imperial College of Engineering and Research,
> Pune.
Chris Bannister
2014-02-07 23:42:33 UTC
On Fri, Feb 07, 2014 at 10:38:36AM +0530, Anubhav Yadav wrote:
> I face a question now:
> 1) Should I take time to learn a new twm, or should I install both twm and xfce?

apt-cache show twm, there is only one! :)

--
"If you're not careful, the newspapers will have you hating the people
who are being oppressed, and loving the people who are doing the
oppressing." --- Malcolm X
Klaus
2014-02-08 00:27:55 UTC
On 07/02/14 23:42, Chris Bannister wrote:
> On Fri, Feb 07, 2014 at 10:38:36AM +0530, Anubhav Yadav wrote:
>> I face a question now:
>> 1) Should I take time to learn a new twm, or should I install both twm and xfce?
> apt-cache show twm, there is only one! :)
>

Ah, Friday night
apt-cache search twm !
Chris Bannister
2014-02-08 01:41:29 UTC
On Sat, Feb 08, 2014 at 12:27:55AM +0000, Klaus wrote:
>
> On 07/02/14 23:42, Chris Bannister wrote:
> >On Fri, Feb 07, 2014 at 10:38:36AM +0530, Anubhav Yadav wrote:
> >>I face a question now:
> >>1) Should I take time to learn a new twm, or should I install both twm and xfce?
> >apt-cache show twm, there is only one! :)
> >
>
> Ah, Friday night

We're way ahead of you guys! :)

> apt-cache search twm !

Eeee-Arrrgg! What happened to "Tom's Window Manager"?

http://www.americantrails.org/resources/ManageMaintain/rulesrec.html
(sorry about the oxymoron, it was the first reference I found.)

e.g.
http://whatculture.com/tv/tv-debate-the-british-series-vs-the-american-season.php


--
"If you're not careful, the newspapers will have you hating the people
who are being oppressed, and loving the people who are doing the
oppressing." --- Malcolm X
Scott Ferguson
2014-02-07 11:40:56 UTC
resent to list

-------- Original Message --------
Subject: Re: About to format the whole laptop, need some partitioning
advice.
Date: Fri, 07 Feb 2014 22:25:18 +1100
From: Scott Ferguson <***@gmail.com>
To: Anubhav Yadav <***@gmail.com>

On 07/02/14 16:01, Anubhav Yadav wrote:
>> Defining "desktop" is the tricky bit (to some it only means where the
>> box sits). In this instance I've assumed the OP means office apps, bit
>> of gaming, internet apps - so I'd go go for a two slice setup, with a
>> separate / and /home, with and a swap file. For a similar "desktop" in a
>> business environment it'd be SOE rules so I would use more partitions
>> (backup and rollout planning would be a nightmare otherwise - not to
>> mention support contract negotiation headaches).
>>
>
> Hi, I would like to set up an environment for development and programming.
> So I guess I will have to separate out /usr from / and keep /home on a
> different partition too.
>

Not mandatory, given the size of cheap external drives, but handy *if*
you set the appropriate sizes (you may only know the right size from
personal experience). I'd strongly recommend a separate /home.
In general I use multiple slices, if only because it's easier to hunt a
fish in a pond than in an ocean (a poor analogy for indexing).


Kind regards
Joel Rees
2014-02-07 17:41:59 UTC
butsu butsu butsu butsu ...

On Wed, Feb 5, 2014 at 3:33 PM, Anubhav Yadav <***@gmail.com> wrote:
> Hello list,
>
> I have an Asus laptop, with a 720 GB hard disk and an i5 processor.
> Right now I have a dual boot of Windoze (only for playing fifa
> and assassins creed) and debian wheezy 64 bit.

Someone suggested VMs, and I'll second that suggestion, except reverse
the idea about making MSWindows the primary domain.

I mean no offense to whoever posted that, but it does not make any
sense to me. Use the system you have confidence in as your primary
domain.

You should not be using the primary domain on a day-to-day basis, BTW.
Any way you look at it, if you're doing VMs, you want the system you
work in to be a VM instance. It makes things much easier to manage. But
the advice below does not fully take that kind of thing into account.

Do you have install media for your MSWindows? (The answer to that also
changes some of the rest of the advice ever-so-slightly.)

> Debian takes a lot of time to boot up, and some folks on irc
> said that I should be trying systemd. I did that but there was no
> improvement. So some people also suggested that my partitions
> are somehow not right.

You can get good information and bad information on IRC. The same goes here, of course.

30 seconds after login with Gnome 3 is not that bad, especially with a
5400 RPM notebook-class HD.

Someone asked how much RAM you have. How much? 1G is not enough with Gnome 3.

More than 4G is more than is necessary under many "normal" loads, but
if you don't have 4G, 4G is reasonable, if you can add memory or
replace what you have and have the money to spare.

The reason that "4G" of RAM is based on powers of two where "720G" of
HD is based on powers of ten has to do with the way RAM is laid out in
the semiconductor and the way tracks are laid out on disks, BTW. (Not
that you seem to be worried about the distinction between GB and GiB
in the modern parlance of marketing.)

> So now since I am about to partition I would like to know what should
> be the ideal partitioning scheme.
>
> Here is the screenshot of my current partitions.
> http://i.imgur.com/YI4a1oU.png

What are Neo and Workstation for? (May I ask?)

Some of the numbers look like a bit of overkill in some respects, but
they shouldn't really be causes of slow boots (in and of themselves).

The sizes, mixed with other factors, could, however, cause problems.

> There was a tool which gave the read-write speeds of my hdd,
> that was mentioned by the guys on irc, I can't remember it now, and
> the speeds were very low.

Did that tool also have diagnostics? Was it the ASUS-provided tool?
Did you run some tool to check that your HD is not having SMART-reported
issues? (Ergo, not dying an untimely death.)
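
(If you haven't: the smartmontools package provides smartctl, which
reads the drive's own SMART health data. A minimal sketch, run as root;
the device name /dev/sda is an assumption:

    smartctl -H /dev/sda        # overall health verdict
    smartctl -A /dev/sda        # attributes: reallocated sectors, etc.
    smartctl -t short /dev/sda  # kick off a short self-test

Not definitive, but a dying drive often shows up there first.)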

> So these are the questions:
>
> 1) What partitioning scheme should I choose now, If I want to have
> /home, /var, /usr, /tmp on different partitions and I just want a windoze
> partition of 50-60 gb.

Suggestions from me (and no reasons to trust me more than anyone else, perhaps):

/ (root partition) should be at least large enough to handle a /var
gone out of control if /var doesn't mount, or if you don't have a
separate /var. Minimum 4G (base ten or base two, either way). I'd go
with 8G, since you're starting with a drive bigger than 120G. Larger
if you do choose to combine /usr and /var, and so forth, with the root
partition.

/etc? I've seen recommendations to separate /etc as a partition. It's
a bit of a trap for a home-use machine, don't do it this time around.
Keep /etc with /.

/bin? /sbin? /lib? Keep these combined with / unless you like to
confuse the kernel when it is trying to boot and can't find any of the
standard tools or even some of the libraries it needs to boot even to
single-user mode these days.

/usr? There are strange things that happen to Red Hat (Fedora, etc.)
style machines during boot that indicate against /usr being separate.
I've been bitten by them on Fedora, which is one of the reasons I am
using Debian now.

I keep /usr separate because it tends to change a lot when you install
and remove packages. It's that simple.

However, you don't need more than 32G for /usr unless you really go
crazy installing (literally) every package available, and installing a
lot of packages is one good way to slow your machine down on boot and
login. (Of course, if you don't install lots of stuff, you never get
to play with it and discover new tools. :-/)

Well, be a bit careful what you install beyond what you know you
need, but not too careful. Anyway, 32G for /usr should not be
overkill, and won't be too time consuming when it has to be fsck-ed.

fsck demonstrates one good reason to keep partitions small. Large
partitions take longer on fsck and similar maintenance.

And if you ever have to search for lost text files with testdisk or
such, larger than 32G can be a real, serious show stopper. (I gave up
when I lost two days' work to a bad Makefile just two weeks back,
because the files were text files and too small for testdisk to see in
the partition they were in. I could have resorted to lower-level
techniques like grep /dev/sda3 or hexdump -C | grep, but I decided it
would be faster to use my gray matter and type the stuff in from
scratch. It was. The first time took two days to produce the files;
the second time took four hours, and was good for checking my work.
Programming is like that in a lot of cases. 8-o)
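
(For reference, the low-level approach looks roughly like this. A sketch
only: the partition name is a stand-in, and send the output to a
*different* disk than the one you are searching:

    grep -a -C 5 'a phrase you remember' /dev/sda3 > /media/usb/found.txt
    hexdump -C /dev/sda3 | grep 'shortstr'   # only matches strings short
                                             # enough to fit on one line

The -a makes grep treat the raw device as text.)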

Unfortunately, /usr/bin, /usr/sbin, and /usr/lib may contain stuff
that the boot-up process wants to use, and may thus cause problems if
they can't be mounted. As I say, I saw that on Fedora. Not on Debian.
That says something about the differences between the two, I think.

(Fedora folks keep themselves busy inventing solutions to problems of
their own making, it seems to me. That's okay for them. Maybe someday
in the distant future, the side-tour on systemd will bear meaningful
fruit. I don't expect it to happen this year or next, and that's
another reason I'm using Debian now. They can have their fun. I need
to focus on other things, myself.)

So, for Debian, I recommend a separate /usr, 32G since you have it.

/tmp? Some say it is not used any more. I say give it at least 8G. 16G
since you have a big HD to start with. I've used it on odd occasions,
and making it too small is bad news. Keeping it separate, so that you
can separate the stuff that changes from the stuff that doesn't is
still a good idea.

/var? Similar to my advice on /tmp, separate partition to separate the
stuff that changes a lot from the stuff that doesn't change as much.

But /var tends to stay around, where /tmp is supposed to be cleared
(or clear-able) on boot. So, I'd recommend 16G or even 32G for /var.

/var gets really hammered on system updates. /var/log gets filled up
quickly when things go south.

And there is /var/tmp which gets used more than /tmp these days. (Both
still get used, even though there are those who claim that ramdisk
temporary files make more sense. Such arguments tend to ignore certain
real-world issues and practices.)

On Fedora, I'd recommend a separate 16G partition for /var/tmp, but
separating /var/tmp is not necessary on Debian.

/home -- 32G. Yes, that will get filled up. That's a good thing,
because you then see what you have that needs to be backed up and what
you have that just needs to be deleted.

If you have reason to host your own http or ftp servers, you might
wish to allocate the base directories for those as /var/ftp and
/var/www or the like. Oh, yeah, samba (or whatever Redmond says that
should be called these days) and nfs shares, too. (And netatalk?) If
you do host services, you probably want to mount their base
directories as separate partitions.

That leaves you with a huge unallocated piece of your hard disk. This
is a good thing. It helps you see what you are using, where, and why.
And when you need to adjust things, you have unused disk space to
partition a bit more out of and mount somewhere (such as, say,
/var/www).

du -h and df -h are good tools to help you see what's being used
where. Check out the man pages for them.
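
For instance (run the du as root to avoid permission noise):

    df -h                      # used/free space per mounted filesystem
    du -sh /var/* | sort -h    # which directories under /var are biggest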

LVM versus DOS extended? I like both. Probably not on the same
machine. LVM has flexibility, since, if you discover that 16G for /var
won't carry you through a system upgrade, you can simply add space to
/var instead of copying sub-directories to their own partitions. Be
aware, however, that too much playing with LVM to adjust your
partition sizes will definitely slow your file system down.

Oh, and if you leave yourself a lot of unallocated disk space, that
leaves more room for VMs, later.

> 2) [...(has been answered)]

One more BTW -- You do want to purchase an external HD for backup.
You'll be much less stressed out when things don't do what you expect
them to, and, if you are studying engineering stuff, you have to get
used to the idea that things don't happen the way you expect.

The creativity you learn when you have no backup and the system
doesn't boot for some semi-trivial reason is good, but backup is
usually better.

--
Joel Rees

Be careful where you see conspiracy.
Look first in your own heart.
Anubhav Yadav
2014-02-08 10:45:03 UTC
> Someone asked how much RAM you have. How much? 1G is not enough with Gnome 3.
>
> More than 4G is more than is necessary under many "normal" loads, but
> if you don't have 4G, 4G is reasonable, if you can add memory or
> replace what you have and have the money to spare.

I have 8 GB ram :)

> What are Neo and Workstation for? (May I ask?)

They contained movies and PDFs; they were Windows partitions and will be going away.

> Did that tool also have diagnostics? Was it the ASUS-provided tool?
> Did you run some tool to check that your HD is not having SMART-reported
> issues? (Ergo, not dying an untimely death.)
No, it just gave me the copying speed of the HDD. It's a standard Linux
utility but I forgot the name.

> Suggestions from me (and no reasons to trust me more than anyone else, perhaps):
>
> / (root partition) should be at least large enough to handle a /var
> gone out of control if /var doesn't mount, or if you don't have a
> separate /var. Minimum 4G (base ten or base two, either way). I'd go
> with 8G, since you're starting with a drive bigger than 120G. Larger
> if you do choose to combine /usr and /var, and so forth, with the root
> partition.
>
> /etc? I've seen recommendations to separate /etc as a partition. It's
> a bit of a trap for a home-use machine, don't do it this time around.
> Keep /etc with /.
>
> /bin? /sbin? /lib? Keep these combined with / unless you like to
> confuse the kernel when it is trying to boot and can't find any of the
> standard tools or even some of the libraries it needs to boot even to
> single-user mode these days.
>
> /usr? There are strange things that happen to Red Hat (Fedora, etc.)
> style machines during boot that indicate against /usr being separate.
> I've been bitten by them on Fedora, which is one of the reasons I am
> using Debian now.
>
> I keep /usr separate because it tends to change a lot when you install
> and remove packages. It's that simple.
>
> However, you don't need more than 32G for /usr unless you really go
> crazy installing (literally) every package available, and installing a
> lot of packages is one good way to slow your machine down on boot and
> login. (Of course, if you don't install lots of stuff, you never get
> to play with it and discover new tools. :-/)
>
> Well, be a bit careful what you install beyond what you know you
> need, but not too careful. Anyway, 32G for /usr should not be
> overkill, and won't be too time consuming when it has to be fsck-ed.
>
> fsck demonstrates one good reason to keep partitions small. Large
> partitions take longer on fsck and similar maintenance.
>
> And if you ever have to search for lost text files with testdisk or
> such, larger than 32G can be a real, serious show stopper. (I gave up
> when I lost two days' work to a bad Makefile just two weeks back,
> because the files were text files and too small for testdisk to see in
> the partition they were in. I could have resorted to lower-level
> techniques like grep /dev/sda3 or hexdump -C | grep, but I decided it
> would be faster to use my gray matter and type the stuff in from
> scratch. It was. The first time took two days to produce the files;
> the second time took four hours, and was good for checking my work.
> Programming is like that in a lot of cases. 8-o)
>

I would also like to be a good programmer :)


> Unfortunately, /usr/bin, /usr/sbin, and /usr/lib may contain stuff
> that the boot-up process wants to use, and may thus cause problems if
> they can't be mounted. As I say, I saw that on Fedora. Not on Debian.
> That says something about the differences between the two, I think.
>
> (Fedora folks keep themselves busy inventing solutions to problems of
> their own making, it seems to me. That's okay for them. Maybe someday
> in the distant future, the side-tour on systemd will bear meaningful
> fruit. I don't expect it to happen this year or next, and that's
> another reason I'm using Debian now. They can have their fun. I need
> to focus on other things, myself.)
>
> So, for Debian, I recommend a separate /usr, 32G since you have it.
>
> /tmp? Some say it is not used any more. I say give it at least 8G. 16G
> since you have a big HD to start with. I've used it on odd occasions,
> and making it too small is bad news. Keeping it separate, so that you
> can separate the stuff that changes from the stuff that doesn't is
> still a good idea.
>
> /var? Similar to my advice on /tmp, separate partition to separate the
> stuff that changes a lot from the stuff that doesn't change as much.
>
> But /var tends to stay around, where /tmp is supposed to be cleared
> (or clear-able) on boot. So, I'd recommend 16G or even 32G for /var.
>
> /var gets really hammered on system updates. /var/log gets filled up
> quickly when things go south.
>
> And there is /var/tmp which gets used more than /tmp these days. (Both
> still get used, even though there are those who claim that ramdisk
> temporary files make more sense. Such arguments tend to ignore certain
> real-world issues and practices.)

Why are all the partition sizes you are recommending powers of two?
Is there a specific reason behind this?

>
> On Fedora, I'd recommend a separate 16G partition for /var/tmp, but
> separating /var/tmp is not necessary on Debian.
>
> /home -- 32G. Yes, that will get filled up. That's a good thing,
> because you then see what you have that needs to be backed up and what
> you have that just needs to be deleted.
>
> If you have reason to host your own http or ftp servers, you might
> wish to allocate the base directories for those as /var/ftp and
> /var/www or the like. Oh, yeah, samba (or whatever Redmond says that
> should be called these days) and nfs shares, too. (And netatalk?) If
> you do host services, you probably want to mount their base
> directories as separate partitions.
>
> That leaves you with a huge unallocated piece of your hard disk. This
> is a good thing. It helps you see what you are using, where, and why.
> And when you need to adjust things, you have unused disk space to
> partition a bit more out of and mount somewhere (such as, say,
> /var/www).
>
> du -h and df -h are good tools to help you see what's being used
> where. Check out the man pages for them.
>
> LVM versus DOS extended? I like both. Probably not on the same
> machine. LVM has flexibility, since, if you discover that 16G for /var
> won't carry you through a system upgrade, you can simply add space to
> /var instead of copying sub-directories to their own partitions. Be
> aware, however, that too much playing with LVM to adjust your
> partition sizes will definitely slow your file system down.
>
> Oh, and if you leave yourself a lot of unallocated disk space, that
> leaves more room for VMs, later.
>
>> 2) [...(has been answered)]
>
> One more BTW -- You do want to purchase an external HD for backup.
> You'll be much less stressed out when things don't do what you expect
> them to, and, if you are studying engineering stuff, you have to get
> used to the idea that things don't happen the way you expect.

It was right at the top of my list of purchases. Suddenly something
really bad happened today, and my whole budget is shattered. You have
given such a detailed description of everything, and now I cannot try
any of it.
Joel Rees
2014-02-10 11:59:43 UTC
Do you have a live CD/DVD/USB/SD? You might ask your friend to let you
download Knoppix, for instance, and burn it to a CD.
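
If you go the USB route rather than a CD, writing the image is typically
just the following. (A sketch: the filename and device are placeholders,
and triple-check the device name, since dd will happily overwrite the
wrong disk.)

    dd if=knoppix.iso of=/dev/sdX bs=4M
    sync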

Or the install disk should do, really, if you are comfortable with the
command line.

Go into the BIOS with the drive detached. Set the BIOS to boot to
install media first, whether you are using CD/DVD or a flash device.
Then let it boot the live/install, just to be sure it does. (I'm
thinking it might.)

If it doesn't boot a live CD/DVD/USB/SD image, well, we'll think about
that if it comes to that.

If it boots the live/install image, power back down the proper way and
attach the hard disk that is being recalcitrant. Then watch it like a
hawk while it boots, to be sure you got the boot priorities in the
BIOS right.

BIOSses can be recalcitrant, too. It might take several tries to get
the priorities right.

If it really does hang the system to have the HD plugged in when you
are booting a live image or an install image, given the history you've
described, I'm going to suggest giving it about 10% odds that the HD
was already failing. Not positive, but a possibility.

Joel Rees
Anubhav Yadav
2014-02-10 14:25:38 UTC
Hello everyone, I have got some good news.
The hdd is now back from the dead, alive and working well.

I went to my friend who had that casing. He attached my hdd to
his Windows 7 machine and, as usual, it didn't show up.

So I had a Debian laptop with me, and I connected the hdd to that
laptop and voilà, dmesg showed this:

> [ 101.317618] Initializing USB Mass Storage driver...
> [ 101.317784] scsi6 : usb-storage 2-1.2:1.0
> [ 101.317865] usbcore: registered new interface driver usb-storage
> [ 101.317867] USB Mass Storage support registered.
> [ 102.316522] scsi 6:0:0:0: Direct-Access Mass Storage Device PQ: 0 ANSI: 0
> [ 102.318131] sd 6:0:0:0: Attached scsi generic sg2 type 0
> [ 102.318814] sd 6:0:0:0: [sdb] 1465149166 512-byte logical blocks: (750 GB/698 GiB)
> [ 102.319444] sd 6:0:0:0: [sdb] Write Protect is off
> [ 102.319459] sd 6:0:0:0: [sdb] Mode Sense: 03 00 00 00
> [ 102.320084] sd 6:0:0:0: [sdb] No Caching mode page present
> [ 102.320091] sd 6:0:0:0: [sdb] Assuming drive cache: write through
> [ 102.322308] sd 6:0:0:0: [sdb] No Caching mode page present
> [ 102.322316] sd 6:0:0:0: [sdb] Assuming drive cache: write through
> [ 102.404425] sdb: sdb1 sdb2 sdb3 < sdb5 sdb6 sdb7 sdb8 sdb9 sdb10 sdb11 sdb12 sdb13 sdb14 sdb15 sdb16 sdb17 sdb18 sdb19 sdb20 sdb21 sdb22 sdb23 sdb24 sdb25 sdb26 sdb27 sdb28 sdb29 sdb30 sdb31 sdb32 sdb33 sdb34 sdb35 sdb36 sdb37 sdb38 sdb39 sdb40 sdb41 sdb42 sdb43 sdb44 sdb45 sdb46 sdb47 sdb48 sdb49 sdb50 sdb51 sdb52 sdb53 sdb54 sdb55 sdb56 sdb57 sdb58 sdb59 sdb60 sdb61 sdb62 sdb63 sdb64 sdb65 sdb66 sdb67 sdb68 sdb69 sdb70 sdb71 sdb72 sdb73 sdb74 sdb75 sdb76 sdb77 sdb78 sdb79 sdb80 sdb81 sdb82 sdb83 sdb84 sdb85 sdb86 sdb87 sdb88 sdb89 sdb90 sdb91 sdb92 sdb93 sdb94 sdb95 sdb96 sdb97 sdb98 sdb99 sdb100 sdb101 sdb102 sdb103 sdb104 sdb105 sdb106 sdb107 sdb108 sdb109 sdb110 sdb111 sdb112 sdb113 sdb114 sdb115 sdb116 sdb117 sdb118 sdb119 sdb120 sdb121 sdb122 sdb123 sdb124 sdb125 sdb126 sdb127 sdb128 sdb129 sdb130 sdb131 sdb132 sdb133 sdb134 sdb135 sdb136 sdb137 sdb138 sdb139 sdb140 sdb141 sdb142 sdb143 sdb144 sdb145 sdb146 sdb147 sdb148 sdb149 sdb150 sdb151 sdb152 sdb153 sdb154 sdb155 sdb156 sdb157 sdb158 sdb159 sdb160 sdb16

So I fired up gparted, and it showed my 698 GB hdd, without a GPT.
So I created a GPT, and have now installed Windows 7 on an 80 GB partition.

I haven't run any tests on the hdd as such; it seems to work fine.
Please suggest some tests I could do using standard Linux tools, if
possible. It will help us benchmark the hdd!

Windows was very bad again and created two primary partitions: one of
79 GB for Windows, and the other of 86 MB for the bootloader.

So tomorrow, I will be installing debian on the remaining space.

Now my question is: what should I choose, logical volumes or primary
volumes? I think I cannot create more than 4 primary volumes on a hdd,
and since two are already occupied, I should create /home, /, /usr and
/var all as logical volumes. (Choice of partitions on the basis of the
discussions above.)

Will choosing logical volumes hurt performance on the hdd?

I have also decided to choose xfce as my primary desktop,
unless I get more knowledgeable in using tiling window managers.

Apart from all that, I would like to sincerely thank each and every one
of you; I never felt that I was alone, always cheered up by you guys.
Sure, hdds are cheap in some countries, say $100-150, but here in this
country I just could not have afforded one without getting some credit
from others.

So you guys do not know how happy I am right now, and how relieved I
feel! Thanks a ton for all the support. I can set up my Debian
box now and prepare for GSoC! :)


> Do you have a live CD/DVD/USB/SD? You might ask your friend to let you
> download Knoppix, for instance, and burn it to a CD.
>

> If it boots the live/install image, power back down the proper way and
> attach the hard disk that is being recalcitrant. Then watch it like a
> hawk while it boots, to be sure you got the boot priorities in the
> BIOS right.
>
> BIOSses can be recalcitrant, too. It might take several tries to get
> the priorities right.

Thanks Joel, I had never heard of this tool, but luckily for me, the
situation of my hard disk was not so bad.

But I really appreciate the time and effort you guys take to reply to me.

--
Regards,
Anubhav Yadav
Imperial College of Engineering and Research,
Pune.
Chris Bannister
2014-02-11 01:04:38 UTC
On Mon, Feb 10, 2014 at 07:55:38PM +0530, Anubhav Yadav wrote:
> Hello everyone, I have got some good news.
> The hdd is now back from the dead, alive and working well.

Alright!

> I went to my friend who had that casing. He attached my hdd to
> his Windows 7 machine and, as usual, it didn't show up.
>
> So I had a Debian laptop with me, and I connected the hdd to that
> laptop and voilà, dmesg showed this:

:) BTW, did you create all those partitions, or is it an artifact of
having gone belly up?

Just make sure you don't repeat the procedure which got you into this
situation in the first place. :)

Give it a good test from a live CD *BEFORE* you partition or install
anything. I think there are special live CD's for this. Or use a live CD
with the appropriate software.
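
For example, any live CD with the standard tools can run a
non-destructive read test (a sketch; the device name is a placeholder,
and expect it to take hours on a 750 GB drive):

    badblocks -sv /dev/sdX

-s shows progress, -v reports any bad blocks found. Running the
smartmontools checks mentioned elsewhere in the thread from the live CD
wouldn't hurt either.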

> Now my question is: what should I choose, logical volumes or primary
> volumes? I think I cannot create more than 4 primary volumes on a hdd,
> and since two are already occupied, I should create /home, /, /usr and
> /var all as logical volumes. (Choice of partitions on the basis of the
> discussions above.)
>
> Will choosing logical volumes hurt performance on the hdd?

Don't confuse yourself. You're confusing volumes with partitions.

I thought you were going to look at LVM, and therefore as I understand
it, this won't be a concern?

--
"If you're not careful, the newspapers will have you hating the people
who are being oppressed, and loving the people who are doing the
oppressing." --- Malcolm X
Joel Rees
2014-02-11 02:25:14 UTC
On Mon, Feb 10, 2014 at 11:25 PM, Anubhav Yadav <***@gmail.com> wrote:
> [...]
> Windows was very bad again and created two primary partitions: one of
> 79 GB for Windows, and the other of 86 MB for the bootloader.

Funny thing about that. They're sort of imitating the Linux community
with that, except the Linux community is working to make our version
of the boot loader partition shareable between multiple OSes.

> So tomorrow, I will be installing debian on the remaining space.
>
> Now my question is: what should I choose, logical volumes or primary volumes?

The current way of doing things seems to be to have one primary volume
for the Linux boot partition and one logical volume for the remainder
of your partitions. But there are two kinds of logical volumes. One is
what we used to call a DOS (MS-DOS) extended or logical partition. The
other is the LVM you've heard quite a bit about in this thread.
(Actually, you could say there are more kinds than that, including ZFS
and such.)

> I think I cannot create more than 4 primary volumes on a hdd, and
> since two are already occupied, I should create /home, /, /usr and
> /var all as logical volumes. (Choice of partitions on the basis of the
> discussions above.)

That's the concept. It is also possible, as has been mentioned, to
install the whole OS in one single partition. Probably. I don't really
think you want to do that, anyway.

> Will choosing logical volumes hurt performance on the hdd?

You're not going to get it right the first time. Or the second time.
And then you will take a different class at school, and what was right
for the last class might not be the best for the next one. That's one
reason you should get an external drive, so you can back your data up
easily and do it a different way, just to see what happens.

Two things you need to understand. First, you probably don't want to try this
without using some sort of "logical volume". I have had a system where
I had two remaining primary partitions, and I made one swap (1G) and
the other root (the rest of the disk). That was several years ago, and
doing such a thing now is actually harder, not easier, I think.

I think I need to explain those four "primary volumes". I was hoping I
could refer you to wikipedia, but the present article,

http://en.wikipedia.org/wiki/Disk_partitioning

has too much information on stuff that will be distracting from the
purpose here. (Microsfot's page on the topic seems to be just enough
information to help the sales crew, as usual.)

History: Back in the days of MS-DOS, whose evil spawn became the
unworthy standard we now have, in order to keep the disk layout simple
and leave as much as possible for data, DOS fixed the number of hard
disk partitions to four. Because of the elements of the spec that
became the effective standard, we are stuck with that number. These
are what I call primary DOS partitions in my better moments. I'm
pretty sure they are what you are calling primary volumes.

Back in the mid-80s (IIRC), Micorosoff admitted their sins and
grudgingly yielded one of those to be an extendable (or "extended", go
figure) partition. Nowadays it is often called a "logical volume" or
some other term that abuses the English.

(Back then, even among English-speaking computer geeks, there weren't
very many who really understood English. That's why so much of the
jargon is confusing. The situation has not improved, especially since
the sales crew seems to enjoy using the confusion to sell people
things they don't need.)

I'm going to call it the extended partition. Only one of the four
primary partitions is allowed to be extended, you see.

Within that extended partition, you can cut a number of "logical
partitions". Quite a few, in fact. (I'm going to call them logical
partitions here.) So, what you have looks sort of like this:

DOS (physical) Partition C
DOS (physical) Partition D
DOS (physical) Partition E
DOS extended partition (no letter):
DOS (logical) Partition F
DOS (logical) Partition G
....

The extended partition does not have to be the last one, but there can
be only one.

Now, we don't have to follow this plan in non-DOS OSses. Mac OS did
not, but sort of does now, for compatibility. openBSD and netBSD, and,
I think, freeBSD can take over the whole drive with their own
partitioning system and completely ignore the DOS scheme. But then
it's not compatible with the Microsoft world, and Micorosfot-trained
techs who try to mount such a drive will see no partitions at all, and
think the drive is damaged or blank. And then they wipe the data or do
something else bad.

Linux, likewise, doesn't really have to do it this way. But we try to
maintain compatibility, for dual boot purposes, and to try to avoid
surprising Microsotf-trained techs.

Now, if you use a DOS extended partition, and use DOS logical
partitions, you don't get to re-size the partitions. (At least, if you
do re-size, you could easily lose data.) If you go this way, here's
how it might look:

DOS (primary) partition 1 (MS-Windows boot)
DOS (primary) partition 2 (MS-Windows C:)
DOS (primary) partition 3 (Debian boot, unrecognizable to MSWindows)
DOS (primary) partition 4 (extended, no letter):
DOS (logical) partition (MS-Windos D:)
DOS (logical) partition (Debian root, unrecognizable to MS^Windows)
DOS (logical) partition (Debian something, unrecognizable to MSWinnows)
....
DOS (logical) partition (Debian swap, unrecognizable to
MS-Wingdowns. I'm having problems typing today.)
un-allocated space! (to be later allocated by gparted and mounted,
say, under /home/music2 or such)

The BSDs tend to take a primary DOS (physical) partition and lay their
own partition labels down inside that. It's a little like DOS logical
partitions in a DOS extended partition, except different. So DOS
doesn't count such a partition as an extended partition, and the BSDs
can manage their stuff without worrying about doing things the DOS
way.

LVM is similar. You take a DOS partition and lay down an LVM label
inside it. Then, you cut the LVM partition up into logical volumes.
Last time I did this, my memory was that the LVM partition can now
itself reside in a DOS logical partition, but my memory may be wrong.
I suppose I should check. Anyway, one way it might look is like this:

DOS (primary) partition 1 (MS-Windows boot)
DOS (primary) partition 2 (MS-Windows C:)
DOS (primary) partition 3 (Debian boot, unrecognizable to MS-Windows)
DOS (primary) partition 4 (LVM physical volume, no letter,
unrecognizable to MS-Windows):
LVM logical volume (Debian root, unrecognizable to MS-Windows)
LVM logical volume (Debian something, unrecognizable to MS-Windows)
....
LVM logical volume (Debian swap, unrecognizable to MS-Windows.)
un-allocated space! (to be later allocated by LVM as appropriate)
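
In command terms, that's roughly the following. (A sketch only: the
partition and volume names are made up, and the Debian installer's
guided LVM option will do all of this for you.)

    pvcreate /dev/sda4           # put an LVM label on the DOS partition
    vgcreate vg0 /dev/sda4       # make a volume group out of it
    lvcreate -L 8G -n root vg0   # carve out logical volumes
    lvcreate -L 32G -n usr vg0
    lvcreate -L 16G -n var vg0
    lvcreate -L 4G -n swap vg0
    # the rest of vg0 stays unallocated, as discussed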

And the advantage of LVM, as has been mentioned, is that you can take
that un-allocated space and just paste it onto an existing Linux
partition. If /var runs out of space (I've had this happen on a
version upgrade, several times), all you have to do is use the LVM
tools to grab more of that un-allocated area and add it to the /var
partition.
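
Concretely, sticking with the hypothetical names above and assuming
ext4, growing /var is two commands:

    lvextend -L +8G /dev/vg0/var   # add 8G to the logical volume
    resize2fs /dev/vg0/var         # grow the filesystem to match

ext4 can be grown while mounted, so this works on a running system.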

It's not perfect. It fragments the disk, and you may see some speed
penalty. (I never did, but I didn't see how far I could push it. After
the third or fourth time I added space to /var and /usr, I backed-up
and re-partitioned the entire system.) On the other hand, it can be
used to implement RAID in a sane manner. (Which is definitely not a
speed penalty, if you do it right.) Okay, RAID is not really relevant
to a laptop, at least not for most of us.

You can shrink partitions with LVM, but shrinking is not perfect. So
you would prefer to leave space un-allocated, or maybe allocate it to
something you wouldn't mind just erasing. Like, if you keep a
collection of ripped CDs in your laptop, since you can always re-rip
them, you could just keep them all in one partition and, when you need
space, erase that partition for the space you need.

gparted does NOT work with LVM partitions. There is a graphical LVM
tool. It's a bit simplified and limited and a bit slow to use, but
it's probably plenty good enough to get you started.

> I have also decided to choose xfce as my primary desktop,
> unless I get more knowledgeable in using tiling window managers.

It's a good point to start from. A more bare-bones window manager that
doesn't handle the desktop metaphor for you is an interesting
adventure in itself, but I think you'll want a little more preparation
before you go there.

> Apart from all that, I would like to sincerely thank each and every one
> of you; I never felt that I was alone, always cheered up by you guys.
> Sure, hdds are cheap in some countries, say $100-150, but here in this
> country I just could not have afforded one without getting some credit
> from others.

Yeah, for some people the cost of a new HD is a month's wages. For
others, an hour's wages. (But I'm trying very hard not to wander off
into politics here. Did I manage to skip the rant about Microsoft's
monopoly practices above?)

Anyway, it's great you got the HD back up, and are ready for more
adventures. ;-)

> [...]

--
Joel Rees

Be careful where you see conspiracy.
Look first in your own heart.
Chris Bannister
2014-02-08 01:57:43 UTC
On Thu, Feb 06, 2014 at 09:50:21PM +1100, Scott Ferguson wrote:
> On 06/02/14 21:32, ***@neutralite.org wrote:
> > I meant an example of stuff which should be in / but are in fact in /usr.
>
> Sorry. I'm curious about that too.

http://www.lmgtfy.com/?q=%2Fusr+site%3Alists.debian.org%2Fdebian-devel

For a start:
https://lists.debian.org/debian-devel/2011/10/threads.html#00157

--
"If you're not careful, the newspapers will have you hating the people
who are being oppressed, and loving the people who are doing the
oppressing." --- Malcolm X
Anubhav Yadav
2014-02-08 10:21:07 UTC
Something really bad happened. I went to format all my partitions from
the Windows 8 installation menu (bootable USB), and when I went to
format the 100 MB Windows boot partition, the installer hung (typical
of Windows). After waiting for about half an hour, I hard-rebooted the
laptop, and it got stuck on the BIOS splash screen. I could do nothing:
I could not enter the BIOS, nor the boot menu.

So I took my laptop to the Asus service center. They connected my
hard disk to a casing, and the hard disk was not even detected on their
computers.
So they confidently said that my hard disk had been damaged and that no
matter what I did, I could not revive it, and that I could get a new
hard disk for blah blah money.

I am so badly screwed. After waiting two months to get my laptop back
(motherboard replacement), I am screwed all over again. I so badly
wanted to prepare for GSoC; I literally cried.

Thanks a lot, everyone, for taking your precious time to reply so
thoughtfully. So thankful to everyone!
David Christensen
2014-02-08 21:01:11 UTC
On 02/08/2014 02:21 AM, Anubhav Yadav wrote:
> Something really bad happened. ...

No problem. Part of the FOSS hobby is breaking your toy and then having
to fix it. :-)


First, looking back on this thread, I see a meta-problem. Please read
these (adjust for context):

http://www.emailreplies.com/

https://www.freebsd.org/doc/en/articles/freebsd-questions/


http://www.freebsd.org/doc/en/articles/mailing-list-faq/article.html#bikeshed


Have you done this?

On 02/05/2014 12:11 AM, David Christensen wrote:
> Most major hard drive manufacturers offer drive diagnostic tools.
> Download the tool for your brand of HDD and use it to check your
> drive. (I prefer a stand-alone bootable ISO image that I can burn to
> CD.)


David
David Christensen
2014-02-09 09:12:06 UTC
On 02/08/2014 11:55 PM, Anubhav Yadav wrote:
> I wanted to partition the hdd, but now my hdd has been corrupted ...

My next step would be to download, burn, and run the HDD manufacturer's
diagnostic tool. Again, beware that bad HDD electronics can damage
whatever you plug the HDD into, so choose your guinea pigs carefully:

- If the computer won't POST or the tool won't run with the HDD in an
external case/ dock, try another cable, case/ dock, and/or computer.
Try new and old computers and/or equipment. Think carefully before
trying an internal connection. If you can't get the tool to work after
trying many combinations, then the HDD electronics are toast. Get
another drive.

- If/ when you can run the tool, perform all available tests. Next,
regardless of the test results, wipe the entire drive (fill with zeros;
750 GB will take a long time). Use a bootable Linux CD or USB drive and
dd (see the sketch below) if the HDD diagnostic tool doesn't have this
feature (some lack it). Then run the tool and all the tests again. If
the drive passes 100%, you're in luck. If not, get another drive.
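
The zero-fill step from a Linux boot is a one-liner. (The device name
is a placeholder; triple-check it, because this destroys everything on
the target drive.)

    dd if=/dev/zero of=/dev/sdX bs=1M
    sync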


If you end up getting a non-new replacement drive, perform the above
steps on it.


Once you have a known good drive, plug it into your laptop and see what
happens. If it works, you're back to the partitioning question. If
not, the next step will be to trouble-shoot the laptop.


> I appreciate the fact that you are taking time and replying me. Means
> a lot. Thank you!

YW :-)


David
Celejar
2014-02-09 13:55:41 UTC
On Sun, 09 Feb 2014 01:12:06 -0800
David Christensen <***@holgerdanske.com> wrote:

...

> diagnostic tool. Again, beware that bad HDD electronics can damage
> whatever you plug the HDD into, so choose your Guinea pigs carefully:

Never heard that before, but I'm certainly no expert on such things. Do
you have further information on this?

Celejar
Lisi Reisz
2014-02-08 23:09:59 UTC
On Saturday 08 February 2014 10:21:07 Anubhav Yadav wrote:
> Something really bad happened. I went to format all my partitions
> from the Windows 8 installation menu (bootable USB), and when I went
> to format the 100 MB Windows boot partition, the installer hung
> (typical of Windows). After waiting for about half an hour, I
> hard-rebooted the laptop, and it got stuck on the BIOS splash
> screen. I could do nothing: I could not enter the BIOS, nor the
> boot menu.

I have a Windows-free household, but I know well that not everyone is
so lucky. You have my full sympathy.

> So I took my laptop to the Asus service center. They connected my
> hard disk to a casing, and the hard disk was not even detected on
> their computers.
> So they confidently said that my hard disk had been damaged and that
> no matter what I did, I could not revive it, and that I could get a
> new hard disk for blah blah money.

Don't give up yet. Have you a friend who could let you look at your
hard-drive yourself?

And, sorry to seem slow, but why would a messed up hard drive prevent
access to the BIOS? Now a faulty motherboard could, and it is not
unheard of for repairs to fail to function.

> I am so badly screwed. After waiting two months to get my laptop
> back (motherboard replacement), I am screwed all over again. I so
> badly wanted to prepare for GSoC; I literally cried.

Heartbreaking. I really feel for you. But don't give up yet.
Formulate a question or series of questions using the advice that
David gave you and see what this list can come up with. There are
those here who are quite simply magicians.

Nil desperandum.

Lisi
Anubhav Yadav
2014-02-09 05:53:14 UTC
> Don't give up yet. Have you a friend who could let you look at your
> hard-drive yourself?

I have got a friend's laptop (a Dell) in my room right now, so I am
thinking of removing my hdd, putting it in the Dell laptop, and trying
to boot Linux from it.

>
> And, sorry to seem slow, but why would a messed up hard drive prevent
> access to the BIOS? Now a faulty motherboard could, and it is not
> unheard of for repairs to fail to function.

If I remove the hdd and boot the laptop, I can go into the BIOS easily.

> Heartbreaking. I really feel for you. But don't give up yet.
> Formulate a question or series of questions using the advice that
> David gave you and see what this list can come up with. There are
> those here who are quite simply magicians.

Yes, I agree; the folks here can do just about anything.
Truly ingenious! I will keep reporting on what I am doing.
Thanks
--
Regards,
Anubhav Yadav
Imperial College of Engineering and Research,
Pune.
Joel Rees
2014-02-10 11:25:52 UTC
On Sun, Feb 9, 2014 at 8:09 AM, Lisi Reisz <***@gmail.com> wrote:
> [...]
> And, sorry to seem slow, but why would a messed up hard drive prevent
> access to the BIOS? Now a faulty motherboard could, and it is not
> unheard of for repairs to fail to function.

While there is a possibility that he's taken the covers off without
proper precautions against static electricity and fried the controller
in a way that, say, ties an IRQ active or something, I'm guessing that
he's simply not used to the boot sequence, and where it hangs is a bit
past the BIOS.

And, as we know about people who are trained to work on MSWindows,
they are trained not to fix anything that's too hard. Time is money,
etc.

--
Joel Rees

Be careful where you see conspiracy.
Look first in your own heart.
Schlacta, Christ
2014-02-08 23:18:56 UTC
On Feb 8, 2014 12:26 PM, "Anubhav Yadav" <***@gmail.com> wrote:
>
>
> On 9 Feb 2014 01:41, "Schlacta, Christ" <***@aarcane.org> wrote:
> >
> > If it was a Windows system they connected your drive to, they probably
didn't actually give it a good check. A new hard drive can be had for
between $50 and $150 depending on your needs. Also, try booting the system
from a live USB and see what it says about the hard drive. Check dmesg, and
look for /dev/sda and /dev/sdb. If you find either of those nodes, run
SMART tests on them. The partition table on your drive is probably just
messed up, and that's a much easier fix than a new hard drive.
>
> The only problem is that when I boot my laptop with the hard disk in
it, it gets stuck on the splash screen and I cannot boot into any live
Linux distro. I cannot even enter the BIOS.

What happens if you POST the laptop, then engage the hard drive after you
boot into a Linux live USB?
Anubhav Yadav
2014-02-09 05:54:54 UTC
> What happens if you POST the laptop, then engage the hard drive after you
> boot into a Linux live USB?

You mean to say I should remove my hdd, boot my laptop using a USB,
and then connect the hdd to the laptop while it's running?

Can it further damage the hdd? I should give this a try!

--
Regards,
Anubhav Yadav
Imperial College of Engineering and Research,
Pune.
Chris Bannister
2014-02-09 10:57:34 UTC
On Sun, Feb 09, 2014 at 11:24:54AM +0530, Anubhav Yadav wrote:
> > What happens if you POST the laptop, then engage the hard drive after you
> > boot into a Linux live USB?
>
> You mean to say I should remove my hdd, boot my laptop using a USB,
> and then connect the hdd to the laptop while it's running?
>
> Can it further damage the hdd? I should give this a try!

Don't mess with your laptop with the power still on. I wouldn't
recommend plugging in a HDD with the power on!


--
"If you're not careful, the newspapers will have you hating the people
who are being oppressed, and loving the people who are doing the
oppressing." --- Malcolm X
David Christensen
2014-02-09 06:26:58 UTC
On 02/08/2014 09:54 PM, Anubhav Yadav wrote:
> You mean to say I should remove my hdd, boot my laptop using a USB,
> and then connect the hdd to the laptop while it's running?
> Can it further damage the hdd?

YES!

And, damage the motherboard.


> I should give this a try!

NO!


If you want to hot-plug hard drives, everything has to be designed to do
that. Typically, that means servers with RAID cards, HDD backplane/
cages, and rated HDD's.


David
Anubhav Yadav
2014-02-09 06:29:58 UTC
> YES!
>
> And, damage the motherboard.

> NO!

I was waiting for someone to reply. Thanks!
Now, what if I remove my hdd, plug it into the Dell
laptop, and then boot that laptop? Can I try that?
And then maybe boot a live USB on it?
Will it work?

--
Regards,
Anubhav Yadav
Imperial College of Engineering and Research,
Pune.
David Christensen
2014-02-09 08:00:42 UTC
On 02/08/2014 10:29 PM, Anubhav Yadav wrote:
> Now, what if I remove my hdd, plug it into the Dell
> laptop, and then boot that laptop?

Understand that if the HDD has bad electronics, anything you plug it
into could be damaged.


I always keep an old desktop machine around as my "workbench". If I
smoke a HDD port, the motherboard, or a whole machine, that would be
inconvenient, but not a show-stopper. A workbench, a cache of used/
spare parts, and a few of the right tools make trouble-shooting far easier.


Do you have an anti-static wrist strap and spare anti-static bags?


David
Anubhav Yadav
2014-02-09 08:08:00 UTC
> Understand that if the HDD has bad electronics, anything you plug it into
> could be damaged.

> Do you have an anti-static wrist strap and spare anti-static bags?

No, but I have arranged to meet my friend who runs a laptop repair shop
and is willing to give me access to his hdd casing.
I will take my girlfriend's Dell laptop; I have already installed Debian on it.

Hopefully I'll manage to get the hdd repartitioned. Will keep everyone posted.
Schlacta, Christ
2014-02-09 07:57:24 UTC
On Feb 8, 2014 10:27 PM, "David Christensen" <***@holgerdanske.com>
wrote:
>
> On 02/08/2014 09:54 PM, Anubhav Yadav wrote:
>>
>> You mean to say I should remove my hdd, boot my laptop using a USB,
>> and then connect the hdd to the laptop while it's running?
>> Can it further damage the hdd?
>
>
> YES!
>
> And, damage the motherboard.

No.

SATA supports hotplugging. The worst that will happen is that some
controllers will not recognize devices after hotplugging, because their
firmware is not configured to detect devices after the initial bus scan
at POST.
>
>
>
>> I should give this a try!
>
>
> NO!

It depends. If your hard drive is externally accessible it can't hurt
anything. If your hard drive is under panels that also protect RAM or any
other PCBs, you generally shouldn't run the system with those covers missing.
>
>
> If you want to hot-plug hard drives, everything has to be designed to do
that. Typically, that means servers with RAID cards, HDD backplane/ cages,
and rated HDD's.

Completely wrong. Wikipedia has it in fairly simple words:
http://en.wikipedia.org/wiki/Serial_ATA#Hotplug

Also, the first twenty or so articles on Google all confirm at a glance that
hotswap is supported by SATA intrinsically, and any problems are the result
of software or firmware bugs failing to trigger hotswap events properly.

Please do not spread bad or wrong information.

>
>
> David
David Christensen
2014-02-09 08:25:42 UTC
On 02/08/2014 11:57 PM, Schlacta, Christ wrote:
> ... panels that also protect ram or any other pcbs, you generally
> shouldn't run system with those covers missing.

+1


> Wikipedia has it in fairly simple words:
> http://en.wikipedia.org/wiki/Serial_ATA#Hotplug

"The Serial ATA Spec includes logic for SATA device hotplugging.
Devices and motherboards that meet the interoperability
specification are capable of hot plugging.

Have you checked the OP's hard drive and friend's laptop for SATA
hot-plug interoperability specification compliance?


David
Anubhav Yadav
2014-02-09 09:05:39 UTC
> Have you checked the OP's hard drive and friend's laptop for SATA hot-plug
> interoperability specification compliance?

Although I am a CS student, I had better wait and not put my friend's
hardware in jeopardy. I will report what happens when I get my hands on
the casing and connect the broken hdd to a Debian box.

--
Regards,
Anubhav Yadav
Imperial College of Engineering and Research,
Pune.
Jerry Stuckle
2014-02-09 15:52:13 UTC
On 2/9/2014 2:57 AM, Schlacta, Christ wrote:
>
> On Feb 8, 2014 10:27 PM, "David Christensen" <***@holgerdanske.com
> <mailto:***@holgerdanske.com>> wrote:
> >
> > On 02/08/2014 09:54 PM, Anubhav Yadav wrote:
> >>
> >> You mean to say I should remove my hdd, boot my laptop using a USB,
> >> and then connect the hdd to the laptop while it's running?
> >> Can it further damage the hdd?
> >
> >
> > YES!
> >
> > And, damage the motherboard.
>
> No.
>

Incorrect.

> SATA supports hotplugging. The worst that will happen is that some
> controllers will not recognize devices after hotplugging, because their
> firmware is not configured to detect devices after the initial bus scan
> at POST.

Only if the device and controller fully support hotplugging (surprise -
some DO NOT!) AND the drive is working properly. Clearly the drive is
NOT working properly.

> >
> >
> >
> >> I should give this a try!
> >
> >
> > NO!
>
> It depends. If your hard drive is externally accessible it can't hurt
> anything. If your hard drive is under panels that also protect RAM or
> any other PCBs, you generally shouldn't run the system with those
> covers missing.

Again, incorrect. If the drive is damaged electronically, ANYTHING it
is plugged into can be damaged. And if either the drive or the
controller does not support SATA hotplugging, either (or both) can be
damaged by hotplugging the device.

> >
> >
> > If you want to hot-plug hard drives, everything has to be designed to
> do that. Typically, that means servers with RAID cards, HDD backplane/
> cages, and rated HDD's.
>
> Completely wrong. Wikipedia has it in fairly simple words:
> http://en.wikipedia.org/wiki/Serial_ATA#Hotplug
>

Again see above.

> Also, the first twenty or so articles on Google all confirm at a glance
> that hotswap is supported by SATA intrinsically, and any problems are
> the result of software or firmware bugs failing to trigger hotswap
> events properly.
>
> Please do not spread bad or wrong information.
>

Yes, please do not do so.

Do you believe EVERYTHING you read on the internet?

Jerry
Roger Leigh
2014-02-09 21:05:44 UTC
Permalink
On Wed, Feb 05, 2014 at 08:27:15AM -0800, David Guntner wrote:
>
> It's not just a matter of capacity. I've got a 1TB drive, and I still
> partition them into separate sections:
>
> > $ df -k
> > Filesystem 1K-blocks Used Available Use% Mounted on
> > rootfs 1818872 299704 1426704 18% /
> > udev 10240 0 10240 0% /dev
> > tmpfs 309540 12812 296728 5% /run
> > /dev/disk/by-uuid/36f6b922-0e9a-4ce5-aeee-c92104fa2428 1818872 299704 1426704 18% /
> > tmpfs 5120 4 5116 1% /run/lock
> > tmpfs 1049560 0 1049560 0% /run/shm
> > /dev/sda1 137221 20211 109689 16% /boot
> > /dev/sda12 67284600 16339432 47527264 26% /home
> > /dev/sdb1 307665016 40081124 251955400 14% /backup
> > /dev/sda9 28835836 351612 27019444 2% /opt
> > /dev/sda6 2882592 69908 2666252 3% /tmp
> > /dev/sda7 28835836 7400256 19970800 28% /usr
> > /dev/sda8 48060296 15360908 30258020 34% /usr/local
> > /dev/sda10 28835836 1455184 25915872 6% /var
> > /dev/sda11 28835836 179364 27191692 1% /var/spool

While this works, it's suboptimal for a number of reasons, primarily
that it is inflexible and wastes space. It's inflexible because should
your needs change (e.g. you run out of space on /opt or /var), you
can't do anything about it other than hairy repartitioning involving
backup and restore of the data. It's also vastly more complex than
it really needs to be.

I've been bitten by this in the past. Firstly, when my fixed size
/boot prevented kernel upgrades because two images and initrds would
no longer fit (and the sizes keep getting bigger). Secondly, when
the rootfs got too small and wouldn't allow package install/upgrade
despite having gigabytes of space on /usr; there's no need to
separate them, and it's much less likely to cause problems if they
are together. So you're caught between two bad situations:
preallocating sufficient space that you won't be caught out by
size requirements increasing over time, and overallocation of space
which is then wasted pointlessly.

On Linux, there are three possibilities which mitigate all these
things:

1) Use LVM. You can use the entire drive as a single physical volume
(PV) and then carve it up into separate logical volumes (LVs). This
allows exactly the same strategy as above, but you can start with
the minimum needed size for each partition and leave the remaining
space unallocated. Should you need additional space for any of the
volumes, you can just extend it on demand. Downside: space allocation
is manual and some degree of space wastage still occurs.

2) Use Btrfs. You can have a single Btrfs volume, and then use
subvolumes for all the separate parts, divided up exactly as above.
The subvolumes may be independently snapshotted, backed up and
preserved. The rootfs itself can be a subvolume. The main problem
here is that Btrfs isn't production ready, so I can't recommend it
unless you don't care about your data.

3) Use ZFS. Allocate the drive as a single zpool. You can then create
zfs volumes for all the separate bits. However, you don't have the
space wastage issues since all the data is in a single pool, and
you can adjust the size allocations/quotas on demand for each
individual volume (or leave them unset to give them as much space as
they can get). Needs a kernel patch for the zfs driver. With
kFreeBSD you can do this natively. It has all sorts of great
features which I won't go into here.
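For illustration, here are minimal command sketches of all three
options; the device (/dev/sda2) and all volume/pool names below are
invented, so adapt them to your own setup:

# 1) LVM: one PV/VG, carve out LVs and leave the rest unallocated
pvcreate /dev/sda2
vgcreate vg0 /dev/sda2
lvcreate --size 10G --name root vg0
lvcreate --size 20G --name home vg0
mkfs.ext4 /dev/vg0/home

# 2) Btrfs: one filesystem, subvolumes instead of partitions
mkfs.btrfs /dev/sda2
mount /dev/sda2 /mnt
btrfs subvolume create /mnt/@home
btrfs subvolume snapshot /mnt/@home /mnt/@home-snap

# 3) ZFS: one pool, datasets with adjustable quotas
zpool create tank /dev/sda2
zfs create tank/home
zfs set quota=50G tank/home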

I've tried all three. For Linux, using LVM is easy and can be done
in the installer. If you reinstall you can keep the LVs you want and
wipe/delete the rest. For kFreeBSD, you can install directly onto ZFS;
I've been using it for kFreeBSD and native FreeBSD installs, and it's
the best of the lot--hopefully Debian can offer native support for
Linux at some point [currently needs patching, and the patches don't
work with current 3.12 kernels].


Regards,
Roger

--
.''`. Roger Leigh
: :' : Debian GNU/Linux http://people.debian.org/~rleigh/
`. `' schroot and sbuild http://alioth.debian.org/projects/buildd-tools
`- GPG Public Key F33D 281D 470A B443 6756 147C 07B3 C8BC 4083 E800
Doug
2014-02-09 21:23:27 UTC
Permalink
On 02/09/2014 04:05 PM, Roger Leigh wrote:

/snip/
> On Linux, there are three possibilities which mitigate all these
> things:
>
> 1) Use LVM. You can use the entire drive as a single physical volume
> (PV) and then carve it up into separate logical volumes (LVs). This
> allows exactly the same strategy as above, but you can start with
> the minimum needed size for each partition and leave the remaining
> space unallocated. Should you need additional space for any of the
> volumes, you can just extend it on demand. Downside: space allocation
> is manual and some degree of space wastage still occurs.
>
> 2) Use Btrfs. You can have a single Btrfs volume, and then use
> subvolumes for all the separate parts, divided up exactly as above.
> The subvolumes may be independently snapshotted, backed up and
> preserved. The rootfs itself can be a subvolume. The main problem
> here is that Btrfs isn't production ready, so I can't recommend it
> unless you don't care about your data.
>
> 3) Use ZFS. Allocate the drive as a single zpool. You can then create
> zfs volumes for all the separate bits. However, you don't have the
> space wastage issues since all the data is in a single pool, and
> you can adjust the size allocations/quotas on demand for each
> individual volume (or leave them unset to give them as much space as
> they can get). Needs a kernel patch for the zfs driver. With
> kFreeBSD you can do this natively. It has all sorts of great
> features which I won't go into here.
>
> I've tried all three. For Linux, using LVM is easy and can be done
> in the installer. If you reinstall you can keep the LVs you want and
> wipe/delete the rest.
/snip/
>
> Regards,
> Roger
>
I don't understand LVM, but I tried to install some distro just to
learn about it, and it would only install using LVM, which meant
that it would only install on the entire hard drive. No partitions,
no Windows, no nothing. I installed it on a second small h/d, and
then I found out that nothing on it was accessible from a normal
Linux installed on a normal file system on sda. If LVM becomes
the Linux standard, I will have to find a different OS!

--doug
Joe
2014-02-09 22:04:09 UTC
Permalink
On Sun, 09 Feb 2014 16:23:27 -0500
Doug <***@optonline.net> wrote:


> >
> I don't understand LVM, but I tried to install some distro just to
> learn about it, and it would only install using LVM, which meant
> that it would only install on the entire hard drive. No partitions,
> no Windows, no nothing. I installed it on a second small h/d, and
> then I found out that nothing on it was accessible from a normal
> Linux installed on a normal file system on sda. If LVM becomes
> the Linux standard, I will have to find a different OS!
>

Sounds like a bee-in-the-bonnet distro. Normally, LVM volumes map to
partitions, and as long as you have the LVM packages installed on any
Linux system, it will be able to read LVM systems.
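For example, to get at LVM volumes from a rescue or second system
(a sketch; "somevg-somelv" is a placeholder for whatever names
vgscan and lvs actually report):

apt-get install lvm2    # Debian package name
vgscan                  # find volume groups on attached discs
vgchange -ay            # activate all logical volumes found
mount /dev/mapper/somevg-somelv /mnt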

As you see here, my main workstation has an LVM partition and a normal
one. I have a bit of a bee in my bonnet about grub, having had several
run-ins with it on various machines, and I trust it about as far as I
can throw the average office building. Hence the separate /boot
partition. Grub does understand LVM natively, but if one day it decides
to play dumb, it is more accessible on its own partition, and can be
more easily held to account with the software equivalent of an axe.

Device Boot Start End Blocks Id System
/dev/sda1 * 63 979964 489951 83 Linux
/dev/sda2 979965 625137344 312078690 8e Linux LVM

Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda1 489900 79644 410256 17% /boot
/dev/mapper/first-root 4882276 1308524 3573752 27% /
/dev/mapper/first-backup 97649748 4991160 92658588 6% /backup
/dev/mapper/first-home 52427196 27603080 24824116 53% /home
/dev/mapper/first-tmp 4882276 32860 4849416 1% /tmp
/dev/mapper/first-usr 19529128 8819648 10709480 46% /usr
/dev/mapper/first-var 9764560 1261076 8503484 13% /var

In the context of the actual topic here, I've already said that I don't
think multiple partitions are all that useful on a workstation, so I'm
not necessarily advocating this particular scheme.

--
Joe
Doug
2014-02-09 22:24:41 UTC
Permalink
On 02/09/2014 05:04 PM, Joe wrote:
> On Sun, 09 Feb 2014 16:23:27 -0500
> Doug <***@optonline.net> wrote:
>
>
>> I don't understand LVM, but I tried to install some distro just to
>> learn about it, and it would only install using LVM, which meant
>> that it would only install on the entire hard drive. No partitions,
>> no Windows, no nothing. I installed it on a second small h/d, and
>> then I found out that nothing on it was accessible from a normal
>> Linux installed on a normal file system on sda. If LVM becomes
>> the Linux standard, I will have to find a different OS!
>>
> Sounds like a bee-in-the-bonnet distro. Normally, LVM volumes map to
> partitions, and as long as you have the LVM packages installed on any
> Linux system, it will be able to read LVM systems.
>
> As you see here, my main workstation has an LVM partition and a normal
> one. I have a bit of a bee in my bonnet about grub, having had several
> run-ins with it on various machines, and I trust it about as far as I
> can throw the average office building. Hence the separate /boot
> partition. Grub does understand LVM natively, but if one day it decides
> to play dumb, it is more accessible on its own partition, and can be
> more easily held to account with the software equivalent of an axe.
>
> Device Boot Start End Blocks Id System
> /dev/sda1 * 63 979964 489951 83 Linux
> /dev/sda2 979965 625137344 312078690 8e Linux LVM
>
> Filesystem 1K-blocks Used Available Use% Mounted on
> /dev/sda1 489900 79644 410256 17% /boot
> /dev/mapper/first-root 4882276 1308524 3573752 27% /
> /dev/mapper/first-backup 97649748 4991160 92658588 6% /backup
> /dev/mapper/first-home 52427196 27603080 24824116 53% /home
> /dev/mapper/first-tmp 4882276 32860 4849416 1% /tmp
> /dev/mapper/first-usr 19529128 8819648 10709480 46% /usr
> /dev/mapper/first-var 9764560 1261076 8503484 13% /var
>
> In the context of the actual topic here, I've already said that I don't
> think multiple partitions are all that useful on a workstation, so I'm
> not necessarily advocating this particular scheme.
>
It was not a weird distro. I don't remember whether it was
RedHat, or SUSE or Fedora, but I'm pretty sure it was one of them.
I found that there is something called lvm2 in my repos, and I
installed it; I don't remember if I still have the other distro on
the second drive or if I blew it away (probable!). But it did
*not* respect available partitions--it wanted the whole
damned drive!

I usually use a distro that uses classic grub, and I've never had
a problem with it. I can even boot other distros installed on the
drive from grub.

I remember seeing you or someone writing that multiple partitions
are not useful. I respectfully disagree. Unless someone is storing a
humongous amount of files on their system, there should be lots of
space available on a 1TB drive for Windows and two or three other
systems. (After losing a slew of music downloads after a drive failure,
I no longer store anything like that only on a drive; I copy the files
to a CD. Unfortunately, all those downloads were from the
free system that is no longer available, and so most of the songs
aren't either.)

--doug
Joe
2014-02-10 10:16:25 UTC
Permalink
On Sun, 09 Feb 2014 17:24:41 -0500
Doug <***@optonline.net> wrote:


>
> I remember seeing you or someone writing that multiple partitions
> are not useful. I respectfully disagree. Unless someone is storing a
> humongous amount of files on their system, there should be lots of
> space available on a 1TB drive for Windows and two or three other
> systems.

Yes, no argument there, the question wasn't about total space or
multi-booting, but whether multiple partitions within a single Linux
filesystem were advisable, and if so, which directories and how big.

My contention was that it was vital on a server (and I normally have a
spare one-partition installation of the same OS version on a server, as
well as several partitions within the main OS) but of limited use on a
workstation, and not worth the inevitable wrong guesses about future
needs.

--
Joe
Joel Rees
2014-02-10 11:46:37 UTC
Permalink
On Mon, Feb 10, 2014 at 6:23 AM, Doug <***@optonline.net> wrote:
> [...]
> I don't understand LVM, but I tried to install some distro just to
> learn about it, and it would only install using LVM, which meant
> that it would only install on the entire hard drive. No partitions,
> no Windows, no nothing. I installed it on a second small h/d, and
> then I found out that nothing on it was accessible from a normal
> Linux installed on a normal file system on sda. If LVM becomes
> the Linux standard, I will have to find a different OS!
>
> --doug

Odd. I've used LVM on plain Fedora and Debian (and I think Ubuntu)
installs for going on ten years, now. Very useful, although the tools
are a bit counter-intuitive if you're used to DOS-extended partitions.

If you installed the LVM package in the second "normal" Linux, it
should have been easily accessible. You might want to use the GUI
tools at first, so that you don't damage your brain trying to figure
out good defaults for the command-line tools. Use the command-line
tools to see what the GUI tools did after they've done their job.

And remember you have to have a physical volume, group it into a
volume group, and then cut the volume group into logical volumes; if
you only have one hard drive dedicated to LVM, you should only need
one physical volume and one volume group. (I've tended to think of
volume groups as the LVM equivalent of RAID, but that's an
over-simplification.)
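The read-only reporting commands are safe to run at any time and show
the three layers:

pvs    # physical volumes
vgs    # volume groups
lvs    # logical volumes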

--
Joel Rees

Be careful where you see conspiracy.
Look first in your own heart.
Roger Leigh
2014-02-10 12:06:29 UTC
Permalink
On Sun, Feb 09, 2014 at 04:23:27PM -0500, Doug wrote:
> On 02/09/2014 04:05 PM, Roger Leigh wrote:
>
> /snip/
> > On Linux, there are three possibilities which mitigate all these
> > things:
> >
> > 1) Use LVM. You can use the entire drive as a single physical volume
> > (PV) and then carve it up into separate logical volumes (LVs). This
> > allows exactly the same strategy as above, but you can start with
> > the minimum needed size for each partition and leave the remaining
> > space unallocated. Should you need additional space for any of the
> > volumes, you can just extend it on demand. Downside: space allocation
> > is manual and some degree of space wastage still occurs.
> >
> > 2) Use Btrfs. You can have a single Btrfs volume, and then use
> > subvolumes for all the separate parts, divided up exactly as above.
> > The subvolumes may be independently snapshotted, backed up and
> > preserved. The rootfs itself can be a subvolume. The main problem
> > here is that Btrfs isn't production ready, so I can't recommend it
> > unless you don't care about your data.
> >
> > 3) Use ZFS. Allocate the drive as a single zpool. You can then create
> > zfs volumes for all the separate bits. However, you don't have the
> > space wastage issues since all the data is in a single pool, and
> > you can adjust the size allocations/quotas on demand for each
> > individual volume (or leave them unset to give them as much space as
> > they can get). Needs a kernel patch for the zfs driver. With
> > kFreeBSD you can do this natively. It has all sorts of great
> > features which I won't go into here.
> >
> > I've tried all three. For Linux, using LVM is easy and can be done
> > in the installer. If you reinstall you can keep the LVs you want and
> > wipe/delete the rest.
> /snip/
> >
> > Regards,
> > Roger
> >
> I don't understand LVM, but I tried to install some distro just to
> learn about it, and it would only install using LVM, which meant
> that it would only install on the entire hard drive. No partitions,
> no Windows, no nothing.

While it's possible to use LVM in this manner, it's definitely not
typical. The Physical Volumes are block devices, and as such may be
made up of entire discs, but are typically a single partition on one
disc, or spanned over multiple discs, or on top of MD RAID arrays.
If you install on LVM in the Debian Installer, you can do any of the
above; it certainly won't install on an entire disc by default. You
normally partition and mark certain partitions as PVs for LVM.

I would certainly recommend taking a second look into LVM. You might
be surprised at how much of an improvement it is once you've got over
the initial learning curve. Just think of it as being similar to
traditional partitions, but with the ability to dynamically
reconfigure things on the fly, e.g. online resizing of filesystems.

Just as an example, I've shown below what a simple ZFS setup can look
like. It's a FreeBSD10 fileserver, but it's basically the same on
Linux. I'd show a kFreeBSD example but the system isn't online at
present. Here there are two pools, a single disk with the rootfs, and
a RAID1 array with the user data. Note the scrub status for the
mirror--this is giving us an additional level of error identification+
repair which no other filesystem offers [except btrfs, but you wouldn't
want to use it as previously mentioned]. All the individual datasets
(equivalent to a partition), are separately mounted but the storage is
all taken from the pool they belong to, so the space wastage from
partitioning is eliminated entirely.

Just to comment on the use of a whole disk, note here the pools are all
made of individual partitions, but could be whole discs if desired,
just like for LVM PVs.

% zpool status
pool: bsdroot
state: ONLINE
scan: none requested
config:

NAME STATE READ WRITE CKSUM
bsdroot ONLINE 0 0 0
gptid/7173ee99-882f-11e3-88e2-38eaa7ab6153 ONLINE 0 0 0

errors: No known data errors

pool: green
state: ONLINE
scan: scrub repaired 0 in 13h44m with 0 errors on Mon Feb 10 11:33:10 2014
config:

NAME STATE READ WRITE CKSUM
green ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
diskid/DISK-WD-WMAZA3538113p2 ONLINE 0 0 0
diskid/DISK-WD-WMAZA3538412p2 ONLINE 0 0 0

errors: No known data errors

% zfs list
NAME USED AVAIL REFER MOUNTPOINT
bsdroot 6.37G 210G 144K none
bsdroot/ROOT 2.09G 210G 144K none
bsdroot/ROOT/default 2.09G 210G 2.09G /
bsdroot/tmp 1014M 210G 1014M /tmp
bsdroot/usr 3.04G 210G 144K /usr
bsdroot/usr/home 184K 210G 184K /usr/home
bsdroot/usr/ports 2.51G 210G 2.51G /usr/ports
bsdroot/usr/src 545M 210G 545M /usr/src
bsdroot/var 255M 210G 254M /var
bsdroot/var/crash 148K 210G 148K /var/crash
bsdroot/var/log 472K 210G 472K /var/log
bsdroot/var/mail 152K 210G 152K /var/mail
bsdroot/var/tmp 160K 210G 160K /var/tmp
green 1.08T 723G 144K none
green/export 964G 723G 168K /export
green/export/data 964G 723G 152K /export/data
green/export/data/original 964G 723G 964G /export/data/original
green/home 139G 723G 152K /export/home
green/home/rleigh 139G 723G 138G /export/home/rleigh
green/jail 296K 723G 152K /jail
green/jail/template 144K 723G 144K /jail/template
green/mirror 296K 723G 152K /export/mirror
green/mirror/debian 144K 723G 144K /export/mirror/debian
green/sid 315M 723G 315M none
green/sidclone 315M 723G 315M none

% mount
bsdroot/ROOT/default on / (zfs, local, noatime, nfsv4acls)
devfs on /dev (devfs, local, multilabel)
green/export on /export (zfs, NFS exported, local, nfsv4acls)
green/export/data on /export/data (zfs, NFS exported, local, nfsv4acls)
green/export/data/original on /export/data/original (zfs, NFS exported, local, nfsv4acls)
green/home on /export/home (zfs, NFS exported, local, nfsv4acls)
green/home/rleigh on /export/home/rleigh (zfs, NFS exported, local, nfsv4acls)
green/jail on /jail (zfs, NFS exported, local, nfsv4acls)
green/jail/template on /jail/template (zfs, NFS exported, local, nfsv4acls)
green/mirror on /export/mirror (zfs, NFS exported, local, nfsv4acls)
green/mirror/debian on /export/mirror/debian (zfs, NFS exported, local, nfsv4acls)
bsdroot/tmp on /tmp (zfs, local, noatime, nosuid, nfsv4acls)
bsdroot/usr/home on /usr/home (zfs, local, noatime, nfsv4acls)
bsdroot/usr/ports on /usr/ports (zfs, local, noatime, nosuid, nfsv4acls)
bsdroot/usr/src on /usr/src (zfs, local, noatime, noexec, nosuid, nfsv4acls)
bsdroot/var on /var (zfs, local, noatime, nfsv4acls)
bsdroot/var/crash on /var/crash (zfs, local, noatime, noexec, nosuid, nfsv4acls)
bsdroot/var/log on /var/log (zfs, local, noatime, noexec, nosuid, nfsv4acls)
bsdroot/var/mail on /var/mail (zfs, local, nfsv4acls)
bsdroot/var/tmp on /var/tmp (zfs, local, noatime, nosuid, nfsv4acls)

--
.''`. Roger Leigh
: :' : Debian GNU/Linux http://people.debian.org/~rleigh/
`. `' schroot and sbuild http://alioth.debian.org/projects/buildd-tools
`- GPG Public Key F33D 281D 470A B443 6756 147C 07B3 C8BC 4083 E800
Anubhav Yadav
2014-02-10 17:16:59 UTC
Permalink
From the three possible choices you have listed, I think LVM seems
to be the best option for me,
so that I can give a very small allocation to /var or /tmp, and I can
later expand these logical volumes whenever
required?

A couple of questions:

1) Suppose I give more allocation to /var and later find out that I
require more space for the /home partition,
can I shrink my /tmp partition and increase my /home partition?

2) I need to find out how to increase the size of partitions in LVM; can
that be easily done using gparted?
Roger Leigh
2014-02-10 20:36:02 UTC
Permalink
On Mon, Feb 10, 2014 at 10:46:59PM +0530, Anubhav Yadav wrote:
> From the three possible choices you have listed, I think LVM seems
> to be the best option for me,
> so that I can give a very small allocation to /var or /tmp, and I can
> later expand these logical volumes whenever
> required?
>
> A couple of questions:
>
> 1) Suppose I give more allocation to /var and later find out that I
> require more space for the /home partition,
> can I shrink my /tmp partition and increase my /home partition?

This depends upon whether the filesystem supports shrinking and
whether it can be done in place. I've never tried it.

Normally you would just not allocate the space until you really
need it. Example:

ravenclaw# pvs
PV VG Fmt Attr PSize PFree
/dev/sdb4 ravenclaw lvm2 a-- 256.00g 207.51g
ravenclaw# vgs
VG #PV #LV #SN Attr VSize VFree
ravenclaw 1 4 0 wz--n- 256.00g 207.51g
ravenclaw# lvs
LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert
btrsnap ravenclaw -wi-a----- 6.00g
chroots ravenclaw -wi-ao---- 12.00g
swap ravenclaw -wi-ao---- 14.90g
var ravenclaw -wi-ao---- 15.59g

% mount | grep ravenclaw
/dev/mapper/ravenclaw-var on /var type ext4 (rw,nodev,relatime,data=ordered)
/dev/mapper/ravenclaw-chroots on /srv/chroots type ext4 (rw,relatime,data=ordered)

So here I have 50GiB allocated to four logical volumes, and
over 200GiB unallocated. I can create new LVs, make snapshots
and new LVs with this free space. In terms of disc partitions,
there's just one (/dev/sdb4).

Basically, I'm recommending that you allocate space for just
what you need, plus a little spare, and then you won't need
to shrink anything to free up space--it will just be available.
If you need space for anything special-purpose, just create
an LV for it, and when you're done you can delete it and free
up the space for something else.
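For instance, a throwaway volume in the volume group above (name and
size are arbitrary examples):

ravenclaw# lvcreate --size 20G --name scratch ravenclaw
ravenclaw# mkfs.ext4 /dev/ravenclaw/scratch
ravenclaw# mount /dev/ravenclaw/scratch /mnt
  ... use it, then ...
ravenclaw# umount /mnt
ravenclaw# lvremove /dev/ravenclaw/scratch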

> 2) I need to find out how to increase the size of partitions in LVM; can
> that be easily done using gparted?

No. The partitions, as in real disc partitions, don't change size
once they're set up. Use lvextend to increase the size of a logical
volume. Here's an example extending one of the above LVs from 12
to 13GiB:

ravenclaw# df /srv/chroots
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/ravenclaw-chroots 12254384 4632220 6976636 40% /srv/chroots

ravenclaw# lvextend --size +1G --resizefs /dev/ravenclaw/chroots
Extending logical volume chroots to 13.00 GiB
Logical volume chroots successfully resized
resize2fs 1.42.9 (4-Feb-2014)
Filesystem at /dev/mapper/ravenclaw-chroots is mounted on /srv/chroots; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/mapper/ravenclaw-chroots is now 3407872 blocks long.

ravenclaw# df /srv/chroots
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/ravenclaw-chroots 13286512 4632220 7966824 37% /srv/chroots

So you can see the size increased by 1GiB and we didn't even have
to unmount the filesystem--it all happened online with no
interruption in service.


Regards,
Roger

--
.''`. Roger Leigh
: :' : Debian GNU/Linux http://people.debian.org/~rleigh/
`. `' schroot and sbuild http://alioth.debian.org/projects/buildd-tools
`- GPG Public Key F33D 281D 470A B443 6756 147C 07B3 C8BC 4083 E800
Anubhav Yadav
2014-02-10 20:50:29 UTC
Permalink
I am reading a lot about LVM, and I have a small question. I
found this in the tldp HOWTO on LVM:

> root on LVM requires an initrd image that activates the root LV. If a kernel is upgraded without building the necessary initrd image, that kernel will be unbootable. Newer distributions support lvm in their mkinitrd scripts as well as their packaged initrd images, so this becomes less of an issue over time.

I am not able to understand this but will it affect me on debian?

Also while I am on the partitioner page on the debian-installer, there
is one choice like:
"Guided: Use entire disc and setup as LVM"

Now since I am doing a dual boot and windows is already installed, I
feel I cannot use this option
as it will wipe the partition used by windows.

Can I set LVM using some other method?


> ravenclaw# lvextend --size +1G --resizefs /dev/ravenclaw/chroots
> Extending logical volume chroots to 13.00 GiB
> Logical volume chroots successfully resized
> resize2fs 1.42.9 (4-Feb-2014)
> Filesystem at /dev/mapper/ravenclaw-chroots is mounted on /srv/chroots; on-line resizing required
> old_desc_blocks = 1, new_desc_blocks = 1
> The filesystem on /dev/mapper/ravenclaw-chroots is now 3407872 blocks long.
>
> ravenclaw# df /srv/chroots
> Filesystem 1K-blocks Used Available Use% Mounted on
> /dev/mapper/ravenclaw-chroots 13286512 4632220 7966824 37% /srv/chroots
>
> So you can see the size increased by 1GiB and we didn't even have
> to unmount the filesystem--it all happened online with no
> interruption in service.

Is there a simple GUI tool available for the same?
Roger Leigh
2014-02-10 22:16:57 UTC
Permalink
On Tue, Feb 11, 2014 at 02:20:29AM +0530, Anubhav Yadav wrote:
> I am reading a lot about LVM, and I have a small question. I
> found this in the tldp HOWTO on LVM:
>
> > root on LVM requires an initrd image that activates the root LV. If a kernel is upgraded without building the necessary initrd image, that kernel will be unbootable. Newer distributions support lvm in their mkinitrd scripts as well as their packaged initrd images, so this becomes less of an issue over time.
>
> I am not able to understand this but will it affect me on debian?

No. You can boot directly from an LVM LV, and the installer will
set up a suitable initrd for you and configure grub to load the
kernel and initrd from LVM. As the above says, new distributions
support it properly.

If you have a complex setup (e.g. on RAID5), you might need a
separate /boot. If it's a single disc, it will work just fine.
If you're concerned, just create a separate /boot; it's not
necessary, but it doesn't hurt.
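If you ever want to double-check after a kernel upgrade, something
like this should do (lsinitramfs comes with initramfs-tools):

lsinitramfs /boot/initrd.img-$(uname -r) | grep -i lvm
update-initramfs -u    # regenerate the initrd if anything looks missing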

> Also while I am on the partitioner page on the debian-installer, there
> is one choice like:
> "Guided: Use entire disc and setup as LVM"
>
> Now since I am doing a dual boot and windows is already installed, I
> feel I cannot use this option
> as it will wipe the partition used by windows.
>
> Can I set LVM using some other method?

Yes. Partition manually, and create a partition to hold *all* your
Linux data. However, rather than selecting a filesystem type, you
choose "Physical volume for LVM", and then configure LVM. It will
then let you use this partition for a new LVM volume group. Once
that's done you can add as many logical volumes as you like, and
assign mountpoints to them, make swapspace etc, and then continue
with the install as usual.

> > ravenclaw# lvextend --size +1G --resizefs /dev/ravenclaw/chroots
> Is there a simple GUI tool available for the same?

I have no idea myself. I've always just used the tools directly.


Regards,
Roger

--
.''`. Roger Leigh
: :' : Debian GNU/Linux http://people.debian.org/~rleigh/
`. `' schroot and sbuild http://alioth.debian.org/projects/buildd-tools
`- GPG Public Key F33D 281D 470A B443 6756 147C 07B3 C8BC 4083 E800
Anubhav Yadav
2014-02-11 04:36:56 UTC
Permalink
> Yes. Partition manually, and create a partition to hold *all* your
> Linux data. However, rather than selecting a filesystem type, you
> choose "Physical volume for LVM", and then configure LVM. It will
> then let you use this partition for a new LVM volume group. Once
> that's done you can add as many logical volumes as you like, and
> assign mountpoints to them, make swapspace etc, and then continue
> with the install as usual.

I have 685 GB of free space. I chose 585 GB as my primary partition and
set it as LVM.
Should I keep the bootable flag ON or OFF for this partition?

Also, after that, how can I create sub-volumes? I can't find a way to do
that in the installer!
Anubhav Yadav
2014-02-11 04:45:20 UTC
Permalink
Okay, I figured out how to make more partitions on LVM, just want to
make sure if the bootable flag on the LVM should be ON or OFF.
Joel Rees
2014-02-11 08:02:26 UTC
Permalink
On Tue, Feb 11, 2014 at 1:45 PM, Anubhav Yadav <***@gmail.com> wrote:
> Okay I figured out how to make more partitions on LVM, just want to
> make sure if the bootable flag on the LVM should be ON or OFF

I don't think LVM partitions can be booted at this point in time
without having more fun than you thought you wanted.

Only one partition should have its boot flag set. gparted will unset
all the rest for you.

The one that should be set bootable is the one where you install grub
and the boot kernel, the one we call /boot because it's usually
mounted there. If you haven't made that one, you probably need to go
back and make it. And the installer should complain if you don't.

--
Joel Rees

Be careful where you see conspiracy.
Look first in your own heart.
Anubhav Yadav
2014-02-11 15:41:50 UTC
Permalink
Okay, I installed debian now, a perfect install with LVM.
Here is the output of df -H

Filesystem Size Used Avail Use% Mounted on
rootfs 8.7G 330M 7.9G 5% /
udev 11M 0 11M 0% /dev
tmpfs 830M 754k 830M 1% /run
/dev/mapper/Debian-Root 8.7G 330M 7.9G 5% /
tmpfs 5.3M 0 5.3M 0% /run/lock
tmpfs 1.7G 0 1.7G 0% /run/shm
/dev/mapper/Debian-Home 466G 427M 442G 1% /home
/dev/mapper/Debian-tmp 16G 174M 15G 2% /tmp
/dev/mapper/Debian-Usr 32G 2.8G 28G 10% /usr
/dev/mapper/Debian-Var 32G 461M 30G 2% /var

And this is what lvscan says
ACTIVE '/dev/Debian/Root' [8.19 GiB] inherit
ACTIVE '/dev/Debian/Usr' [29.80 GiB] inherit
ACTIVE '/dev/Debian/Var' [29.80 GiB] inherit
ACTIVE '/dev/Debian/tmp' [14.90 GiB] inherit
ACTIVE '/dev/Debian/Swap' [3.72 GiB] inherit
ACTIVE '/dev/Debian/Home' [440.52 GiB] inherit

Is there anything wrong with this partitioning scheme?

As Joel advised (thanks) I made a big 565 GB primary LVM partition,
and then created partitions as advised by Joel again.
Jochen Spieker
2014-02-11 16:00:11 UTC
Permalink
Anubhav Yadav:
> Okay, I installed debian now, a perfect install with LVM.
> Here is the output of df -H
>
> Filesystem Size Used Avail Use% Mounted on
> rootfs 8.7G 330M 7.9G 5% /

You usually can get away with a much smaller root filesystem if you use
a separate /usr. The good thing is that you won't run into trouble with
dozens of kernels installed (they take more than 100MB in /lib/modules
each).
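You can see what each installed kernel costs with:

du -sh /lib/modules/*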

> Is there anything wrong with this partitioning scheme.

Looks generally fine. One thing I would have done (and which I cannot
see whether you did it) is to leave some free space in the VG so you can
extend existing filesystems or create new ones when you need to.

> As Joel advised (thanks) I made a big 565 GB primary LVM partition,
> and then created partitions as advised by Joel again.

It helps in our understanding if you try to use the correct terminology.
What I think you wanted to say is something like:

"I created one big primary partition and used that as Physical Volume
for my Volume Group 'Debian'. I created all filesystems in Logical
Volumes in that Volume Group."

I capitalized terms specific to LVM.

J.
--
A passionate argument means more to me than a blockbuster movie.
[Agree] [Disagree]
<http://www.slowlydownward.com/NODATA/data_enter2.html>
Roger Leigh
2014-02-11 21:01:31 UTC
Permalink
On Tue, Feb 11, 2014 at 05:00:11PM +0100, Jochen Spieker wrote:
> Anubhav Yadav:
> > Okay, I installed debian now, a perfect install with LVM.
> > Here is the output of df -H
> >
> > Filesystem Size Used Avail Use% Mounted on
> > rootfs 8.7G 330M 7.9G 5% /
>
> You usually can get away with a much smaller root filesystem if you use
> a separate /usr. The good thing is that you won't run into trouble with
> dozens of kernels installed (they take more than 100MB in /lib/modules
> each).

There is little point in having a separate /usr on a Debian system. Both
/ and /usr are managed by dpkg, so splitting them gains you nothing. On
other systems it might make more sense, but on a Linux system with dpkg
or rpm, it's not something which I can recommend. I had a separate /usr
for over a decade until I thought about it long and hard and realised this.

The content of / and /usr (and /boot for most people) is a managed
whole. /var is the only part which can be properly justified in
splitting, since it's writable; likewise for user data in /home or /srv.


Regards,
Roger

--
.''`. Roger Leigh
: :' : Debian GNU/Linux http://people.debian.org/~rleigh/
`. `' schroot and sbuild http://alioth.debian.org/projects/buildd-tools
`- GPG Public Key F33D 281D 470A B443 6756 147C 07B3 C8BC 4083 E800
Roger Leigh
2014-02-11 21:14:02 UTC
Permalink
On Tue, Feb 11, 2014 at 09:11:50PM +0530, Anubhav Yadav wrote:
> Okay, I installed debian now, a perfect install with LVM.
> Here is the output of df -H
>
> Filesystem Size Used Avail Use% Mounted on
> rootfs 8.7G 330M 7.9G 5% /
> udev 11M 0 11M 0% /dev
> tmpfs 830M 754k 830M 1% /run
> /dev/mapper/Debian-Root 8.7G 330M 7.9G 5% /
> tmpfs 5.3M 0 5.3M 0% /run/lock
> tmpfs 1.7G 0 1.7G 0% /run/shm
> /dev/mapper/Debian-Home 466G 427M 442G 1% /home
> /dev/mapper/Debian-tmp 16G 174M 15G 2% /tmp
> /dev/mapper/Debian-Usr 32G 2.8G 28G 10% /usr
> /dev/mapper/Debian-Var 32G 461M 30G 2% /var
>
> And this is what lvscan says
> ACTIVE '/dev/Debian/Root' [8.19 GiB] inherit
> ACTIVE '/dev/Debian/Usr' [29.80 GiB] inherit
> ACTIVE '/dev/Debian/Var' [29.80 GiB] inherit
> ACTIVE '/dev/Debian/tmp' [14.90 GiB] inherit
> ACTIVE '/dev/Debian/Swap' [3.72 GiB] inherit
> ACTIVE '/dev/Debian/Home' [440.52 GiB] inherit
>
> Is there anything wrong with this partitioning scheme?

This looks fine. Some suggestions/comments though:

- I wouldn't personally bother with a separate /usr. You could
move this onto the Root LV and remove the Usr LV.
- The size of the Root LV is more than plenty for /+/usr
combined for all but the biggest installs.
- The size of /home is very big. Not really a problem if you
need that much space right away, but I'd personally have made
it about a tenth of the size and left the remaining space
unallocated until I was sure what I needed it for.

- If you made the tmp LV into swap space so that you had around
18GiB swap total, you could then mount a tmpfs on /tmp which
could potentially be much faster, depending upon the amount of
memory in the system. Or just remove it and extend the tmp LV
--now that you're on LVM you have the power to play around with
this! See /etc/default/tmpfs.
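As a sketch, mounting /tmp on tmpfs can be a single fstab line (the
size here is an arbitrary example), or just set RAMTMP=yes in
/etc/default/tmpfs:

tmpfs  /tmp  tmpfs  defaults,noatime,nosuid,size=4G  0  0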


Regards,
Roger

--
.''`. Roger Leigh
: :' : Debian GNU/Linux http://people.debian.org/~rleigh/
`. `' schroot and sbuild http://alioth.debian.org/projects/buildd-tools
`- GPG Public Key F33D 281D 470A B443 6756 147C 07B3 C8BC 4083 E800
Ralf Mardorf
2014-02-11 22:20:11 UTC
Permalink
On Tue, 2014-02-11 at 21:14 +0000, Roger Leigh wrote:
> > ACTIVE '/dev/Debian/Root' [8.19 GiB] inherit
> - The size of the Root LV is more than plenty for /+/usr
> combined for all but the biggest installs.

No, it isn't, if you will test plenty of DEs and if you will add many
large applications, e.g. several different office suites, DAWs,
GUI editors etc. IOW there is no reason why somebody shouldn't install
nearly all available packages.
Andrei POPESCU
2014-02-11 22:34:30 UTC
Permalink
On Ma, 11 feb 14, 23:20:11, Ralf Mardorf wrote:
> IOW there is no reason why somebody shouldn't install
> nearly all available packages.

Of course there is. Besides the waste of space there are many packages
that run daemons which one might not need.

Kind regards,
Andrei
--
http://wiki.debian.org/FAQsFromDebianUser
Offtopic discussions among Debian users and developers:
http://lists.alioth.debian.org/mailman/listinfo/d-community-offtopic
http://nuvreauspam.ro/gpg-transition.txt
Ralf Mardorf
2014-02-11 22:56:34 UTC
Permalink
On Wed, 2014-02-12 at 00:34 +0200, Andrei POPESCU wrote:
> On Ma, 11 feb 14, 23:20:11, Ralf Mardorf wrote:
> > IOW there is no reason why somebody shouldn't install
> > nearly all available packages.
>
> Of course there is. Besides the waste of space there are many packages
> that run daemons which one might not need.

That's not completely true. Users perhaps want to compare a lot of
software, before they decide which DEs and apps they prefer to use. This
isn't a waste of space. Running unneeded daemons could cause issues,
e.g. if you care about real-time, but even if many unneeded daemons are
installed as hard dependencies, there's no need to start all those
daemons.

There also often isn't a need to e.g. have all USB ports available; it
could make sense to unbind some of them, but few Linux users ever
unbind nasty USB ports.

There is no rule that /, including /usr, won't be much larger than 8.19
GiB. For many installs that is indeed plenty of space, but for other
installs it may not be enough.
Scott Ferguson
2014-02-11 23:47:42 UTC
Permalink
On 12/02/14 09:56, Ralf Mardorf wrote:
> On Wed, 2014-02-12 at 00:34 +0200, Andrei POPESCU wrote:
>> On Ma, 11 feb 14, 23:20:11, Ralf Mardorf wrote:
>>> IOW there is no reason why somebody shouldn't install
>>> nearly all available packages.

Depending on your definition of "reason" ;p

>>
>> Of course there is. Besides the waste of space there are many packages
>> that run daemons which one might not need.
>
> That's not completely true. Users perhaps want to compare a lot of
> software, <snipped>

Or just test the fair use clause in their broadband contract with their
ISP. If the deal doesn't include unlimited downloads, they may
also be testing their spending limits with excess download fees.

Just because you can install everything doesn't mean you won't have to
upgrade it... and that can be expensive.

Consider also that the average "user" is never going to "compare" the
thousands of available packages. They'll just develop the attention span
of a fruit fly and end up "trying" half of nine different things (none
of them properly), like the end result of multi-booting various OSes,
magnified.


Kind regards
Joel Rees
2014-02-12 11:32:35 UTC
Permalink
Rather than answering all the naysayers individually, I'll explain
again, publicly, why I suggested the numbers I did.

On Wed, Feb 12, 2014 at 12:41 AM, Anubhav Yadav <***@gmail.com> wrote:
> Okay, I installed debian now, a perfect install with LVM.
> Here is the output of df -H
>
> Filesystem Size Used Avail Use% Mounted on
> rootfs 8.7G 330M 7.9G 5% /

For those who have complained that is too big:

Sometimes, /var fails to mount. And by the time you notice, the causes
and symptoms of the failed mount have dumped several gigabytes of log
messages into your root file system, because when the /var partition
(volume) doesn't mount, /var is in your root partition.

Sure, if you know what you are doing, it's not going to happen.
Probably. But it can happen, and an inexperienced user doesn't want to
be trying to figure out why /var/log is filling up his root partition
when he can't even run vi because the root partition is reporting over
100% full.

That's not the only reason, but it's a pretty good reason.

> udev 11M 0 11M 0% /dev
> tmpfs 830M 754k 830M 1% /run
> /dev/mapper/Debian-Root 8.7G 330M 7.9G 5% /
> tmpfs 5.3M 0 5.3M 0% /run/lock
> tmpfs 1.7G 0 1.7G 0% /run/shm
> /dev/mapper/Debian-Home 466G 427M 442G 1% /home

Yes, I would suggest shrinking that one now, before it gets to 5%
full. 40G would have been great, but shrinking it to 100G now is
pretty sure to be still safe.
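For the record, ext4 can only be shrunk offline, so a sketch using the
LV names from your lvscan output would be (back up first; shrinking is
riskier than growing, and /home must be unmountable, i.e. nobody
logged in):

umount /home
e2fsck -f /dev/Debian/Home
lvreduce --resizefs --size 100G /dev/Debian/Home
mount /home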

If you want that space available to save video or sound files, make a
separate partition (er, logical volume) and mount it under
/home/share/, make a "share" user:group pair, set it so "share" can't
log in, just to be a little paranoid, and then make all your ordinary
login users members of the "share" group. (How you arrange the
subfolders in /home/share and how you set the group write permissions
is a bit of a matter of taste and your patterns of use.)

And then you can have that space for storing lots of stuff, and if you
end up needing the space, you can move what you stored there out,
remove the logical volume, and add the space to other things.
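A sketch of that setup; the size and the login user name are invented
for illustration:

lvcreate --size 100G --name share Debian
mkfs.ext4 /dev/Debian/share
mkdir -p /home/share
mount /dev/Debian/share /home/share
groupadd share
useradd --gid share --shell /usr/sbin/nologin share  # can't log in
chown root:share /home/share
chmod 2775 /home/share    # setgid: new files inherit the share group
usermod -aG share youruser  # repeat for each ordinary login user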

> /dev/mapper/Debian-tmp 16G 174M 15G 2% /tmp

Roger's comments on /tmp are interesting. But, since /tmp, by
definition, should never have anything in it that must be there after
the next boot, you can test your system out now and try his suggestion
about tmpfs later, when you are more comfortable. Or try it now. I'm
pretty sure he's right that it wouldn't cause problems.

> /dev/mapper/Debian-Usr 32G 2.8G 28G 10% /usr

I am not going to argue here about the current movement to conflate
/usr/bin and /bin. I know the history, I know the current state, and
all I can say is that just about everybody seems to be blind about the
direction this should be going.

In Fedora, you don't currently want a separate /usr because the
guys over there can't figure out how to boot without using stuff in
/usr. The engineers here seem to be able to figure that one out.
And for reasons similar to the one I mentioned (and refrained from
mentioning) above, about /var, it's better to set a wall there. In
Debian, now, you can set the wall, so you should, even if there are
things this side of the wall that should be that side, and vice-versa.

> /dev/mapper/Debian-Var 32G 461M 30G 2% /var
>
> And this is what lvscan says
> ACTIVE '/dev/Debian/Root' [8.19 GiB] inherit
> ACTIVE '/dev/Debian/Usr' [29.80 GiB] inherit
> ACTIVE '/dev/Debian/Var' [29.80 GiB] inherit
> ACTIVE '/dev/Debian/tmp' [14.90 GiB] inherit
> ACTIVE '/dev/Debian/Swap' [3.72 GiB] inherit
> ACTIVE '/dev/Debian/Home' [440.52 GiB] inherit
>
> Is there anything wrong with this partitioning scheme?

Heh. There is never anything perfect when you talk about setting up
computer systems, so, yeah, there is likely to be something you'll
find you don't like about it. But if you'll shrink /home now, you'll
have lots of room to figure out what you prefer, and the hardest thing
I've faced in trying to keep my systems behaving the way I want is
not having room to move things around.

> As Joel advised (thanks) I made a big 565 GB primary LVM partition,
> and then created partitions as advised by Joel again.

--
Joel Rees

Be careful where you see conspiracy.
Look first in your own heart.
Andrei POPESCU
2014-02-11 22:30:48 UTC
Permalink
On Ma, 11 feb 14, 10:15:20, Anubhav Yadav wrote:
> Okay I figured out how to make more partitions on LVM, just want to
> make sure if the bootable flag on the LVM should be ON or OFF

Neither grub nor Linux cares much about the bootable flag.

Kind regards,
Andrei
--
http://wiki.debian.org/FAQsFromDebianUser
Offtopic discussions among Debian users and developers:
http://lists.alioth.debian.org/mailman/listinfo/d-community-offtopic
http://nuvreauspam.ro/gpg-transition.txt
Tom H
2014-02-12 07:51:19 UTC
Permalink
On Mon, Feb 10, 2014 at 11:45 PM, Anubhav Yadav <***@gmail.com> wrote:
>
> Okay I figured out how to make more partitions on LVM, just want to
> make sure if the bootable flag on the LVM should be ON or OFF

If you're using grub (which you must be if "/boot" is an LV or part of
an LV), you don't need that flag.
Schlacta, Christ
2014-02-10 00:35:54 UTC
Permalink
>
> 3) Use ZFS. Allocate the drive as a single zpool. You can then create
> zfs volumes for all the separate bits. However, you don't have the
> space wastage issues since all the data is in a single pool, and
> you can adjust the size allocations/quotas on demand for each
> individual volume (or leave them unset to give them as much space as
> they can get). Needs a kernel patch for the zfs driver. With
> kFreeBSD you can do this natively. It has all sorts of great
> features which I won't go into here.
>
> I've tried all three. For Linux, using LVM is easy and can be done
> in the installer. If you reinstall you can keep the LVs you want and
> wipe/delete the rest. For kFreeBSD, you can install directly onto ZFS;
> I've been using it for kFreeBSD and native FreeBSD installs, and it's
> the best of the lot--hopefully Debian can offer native support for
> Linux at some point [currently needs patching, and the patches don't
> work with current 3.12 kernels]
>
>
I use zfs with debian wheezy, and am migrating to using zfs almost
exclusively for my local deployments. The "kernel patches" are actually
just an add-on module that builds and installs from a repo maintained for
wheezy, which uses DKMS to manage the actual kernel module.

As for not working with 3.12, that's a known issue, and a patch is in
HEAD. The ZFS team is closing a few more bugs and preparing a new release
of ZFS soon, which will build against 3.12.

There's currently discussion on one of the debian lists (I forget which)
about removing the possibility of having nearly-native zfs support in
debian-installer. I wasn't subscribed to the list at the time of the last
post to this thread, so I can't reply to it. Unfortunately, due to the
nature of the CDDL (the license ZFS is under), no Linux distro will be
able to deploy native binary modules with the installer or in default
repositories any time soon.

ZFS is awesome, and I've never lost any data using ZFS, despite some bad
situations including marginal drives and even drive failure. I highly
recommend this solution for everyone who's competent enough to understand
the procedures involved and the achievable benefits. Unfortunately, due to
the proprietary nature of the CDDL, I don't foresee this being a solution
"for the masses" any time soon.
Zenaan Harkness
2014-02-10 02:58:01 UTC
Permalink
On 2/10/14, Schlacta, Christ <***@aarcane.org> wrote:
>> 3) Use ZFS. Allocate the drive as a single zpool. You can then create

>> I've tried all three. For Linux, using LVM is easy and can be done

> I use zfs with debian wheezy, and am migrating to using zfs almost

> ZFS is awesome, and I've never lost any data using ZFS, despite some bad
> situations including marginal drives and even drive failure. I highly
> recommend this solution for everyone who's competent enough to understand
> the procedures involved and the achievable benefits. Unfortunately, due to
> the proprietary nature of the CDDL,

http://en.wikipedia.org/wiki/CDDL

CDDL is FSF approved, so it is definitely a "Free-Software" (libre)
license. It's just that it's not GPL compatible, which is unfortunate,
but I wouldn't worry about that - just install ('auto build' or
whatever) the ZFS module.
Roger Leigh
2014-02-09 21:06:40 UTC
Permalink
On Wed, Feb 05, 2014 at 12:31:06PM -0600, John Hasler wrote:
>
> yaro wrote:
> > Separate /usr is unneeded and actually complicates boot for little benefit.
>
> It allows you to mount it read-only (or not at all when there's a
> problem). It only complicates boot due to the practice of putting stuff
> that belongs under / under /usr.

Nowadays you can just have / read-only, so /usr doesn't need to be kept
separate.

--
.''`. Roger Leigh
: :' : Debian GNU/Linux http://people.debian.org/~rleigh/
`. `' schroot and sbuild http://alioth.debian.org/projects/buildd-tools
`- GPG Public Key F33D 281D 470A B443 6756 147C 07B3 C8BC 4083 E800