Oracle Plans To Bring DTrace To Linux

  • kebabbert
    replied
    Yes, I know that BTRFS will be the default filesystem for Oracle Linux. That is perfectly in order, because Linux is inferior to Solaris, and BTRFS is inferior to ZFS.

    Larry Ellison said that Linux is for the low end, and Solaris is for the high end. I can't find that link now, but if you want, I can Google for it. He also said:

    Ellison now calls Solaris "the leading OS on the planet."

    It makes sense to use ZFS for high-end Solaris, and BTRFS for low-end Linux. Of course, if Larry were really serious about making Linux as good as Solaris, then he would relicense ZFS so Linux could use it. But Larry is not relicensing ZFS - why is that? ZFS is better than BTRFS, and ZFS is mature. Larry wants to keep Solaris for the high end, and Linux for the low end.

    But I am surprised that DTrace is coming to Linux, because DTrace is much better than anything Linux has. Does Larry really want Linux to be as good as Solaris?

    There is a huge technical post by a famous DTrace contributor where he compares SystemTap to DTrace. Among other things, he says that SystemTap might crash the server, which makes it unusable on production servers, whereas DTrace is safe and can be used on production servers. He goes on and on, and it seems that SystemTap has missed some of the main points of having a dynamic instrumentation tool like DTrace.


    But if DTrace comes to Linux, that is good for Linux. Here is the main architect of DTrace, trying out Linux DTrace.
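
    To give a feel for what that safety buys you, here is a classic DTrace one-liner (assuming the dtrace command-line tool is installed; the providers available under the Linux port may differ). It counts system calls per process on a live machine:

    # aggregate system calls by process name until Ctrl-C is pressed
    dtrace -n 'syscall:::entry { @[execname] = count(); }'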



  • kraftman
    replied
    Originally posted by kebabbert View Post
    Holy shit. This cannot be true? Jesus. That is a really bad design choice by the BTRFS team. I really hope it is not true, because that would make BTRFS much worse than I ever imagined. Are you sure?
    Well, I don't care about PR bull and crappy, biased benchmarks against very old Linux systems. A system whose binaries are 30% slower at the highest optimization level simply can't be fast, can it? The reality is that slowlaris isn't interesting to Oracle; they keep it because of old Sun's customers (thankfully Sun is dead). As for btrfs being a 64-bit filesystem, that's true, and that's a very good design choice.



  • simcop2387
    replied
    Originally posted by kebabbert View Post
    I am talking about design problems with ext4 that make it vulnerable to data corruption, because ext4 has no protection against data corruption. Ext4, by design, is not able to detect or repair data corruption. Computer science researchers have proven this with research at universities.

    Regarding ZFS: ZFS, by design, is able to detect and repair data corruption. Thus, the design of ZFS is safe. Of course there are bugs in ZFS, which means that people have had problems even when running ZFS. When the last bugs are gone, ZFS will be a completely safe filesystem. (That will probably never happen, because every complex piece of software has bugs, no matter how hard you try to find them.)

    The thing is, ext4 does have bugs, and it is also not safe by design - which makes ext4 vulnerable to data corruption. ZFS is safe by design - which has been proven by researchers.
    ZFS will always be able to detect data corruption (I believe), but repair will only happen if you are using one of the redundant RAID-Z configurations. Admittedly, though, if you're using ZFS and NOT running with RAID-Z, then you really shouldn't be managing servers.

    Comparing ZFS to ext4, though, is rather unfair. Ext4 was, as you say, never designed for that kind of protection. A fairer comparison would be UFS to ext4, as they serve the same purpose. And before you say that you should never use UFS: there are times when the overhead of ZFS isn't worth it, say when you are running many virtual Solaris systems whose storage already sits on ZFS. There's no reason to put ZFS inside of ZFS.
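
    To make the "repair needs redundancy" point concrete, here is a minimal sketch (the pool and device names are made up; the commands are standard ZFS administration):

    # create a mirrored pool so every block exists in two copies
    zpool create tank mirror c1t0d0 c1t1d0
    # walk the pool, verify every block's checksum, and rewrite bad blocks from the good copy
    zpool scrub tank
    # the CKSUM column reports corruption ZFS detected (and, given redundancy, repaired)
    zpool status -v tank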



  • kraftman
    replied
    Originally posted by kebabbert View Post
    I am talking about design problems with ext4 that make it vulnerable to data corruption, because ext4 has no protection against data corruption. Ext4, by design, is not able to detect or repair data corruption. Computer science researchers have proven this with research at universities.

    Regarding ZFS: ZFS, by design, is able to detect and repair data corruption. Thus, the design of ZFS is safe. Of course there are bugs in ZFS, which means that people have had problems even when running ZFS. When the last bugs are gone, ZFS will be a completely safe filesystem. (That will probably never happen, because every complex piece of software has bugs, no matter how hard you try to find them.)

    The thing is, ext4 does have bugs, and it is also not safe by design - which makes ext4 vulnerable to data corruption. ZFS is safe by design - which has been proven by researchers.
    This is meaningless. ZFS is not safe by design, because in scenarios with long pauses between filesystem checks it can still be corrupted. I showed you this once. You can only say that ZFS is safer by default. There are ways to make ext4 nearly completely safe. Those 'computer science researchers' didn't prove this; they showed that ZFS has some mechanisms that can help in fighting silent data corruption.
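
    For what it's worth, one example of the kind of knob ext4 offers (the device and mount point below are placeholders) is full data journaling; note that it tightens crash consistency but does not add checksums against silent corruption:

    # mount ext4 with full data journaling instead of the default ordered mode
    mount -t ext4 -o data=journal /dev/sda1 /mnt/data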



  • Zetbo
    replied
    http://article.gmane.org/gmane.comp....onest+timeline

    Inside of Oracle, we've decided to make btrfs the default filesystem for
    Oracle Linux. This is going into beta now and we'll increase our usage
    of btrfs in production over the next four to six months. This is a
    really big step forward, but it doesn't cover btrfs in database
    workloads (since we recommend asm for that outside of the filesystem).

    What this means is that we absolutely cannot move forward without btrfsck.
    RH, Fujitsu, SUSE and others have spent a huge amount of time on the filesystem
    and it is clearly time to start putting it into customer hands.



  • kebabbert
    replied
    Originally posted by kraftman View Post
    You knew zfs is also unsafe and can corrupt your data, but you ignored that FACT.
    I am talking about design problems with ext4 that make it vulnerable to data corruption, because ext4 has no protection against data corruption. Ext4, by design, is not able to detect or repair data corruption. Computer science researchers have proven this with research at universities.

    Regarding ZFS: ZFS, by design, is able to detect and repair data corruption. Thus, the design of ZFS is safe. Of course there are bugs in ZFS, which means that people have had problems even when running ZFS. When the last bugs are gone, ZFS will be a completely safe filesystem. (That will probably never happen, because every complex piece of software has bugs, no matter how hard you try to find them.)

    The thing is, ext4 does have bugs, and it is also not safe by design - which makes ext4 vulnerable to data corruption. ZFS is safe by design - which has been proven by researchers.



  • kebabbert
    replied
    Originally posted by kraftman View Post
    And do you have any? It is widely known that Oracle wants to kill old, crappy, legacy slowlaris
    Well, your claim contradicts official Oracle plans and roadmaps.

    "The Solaris operating system is by far the best Unix technology available in the market," Ellison said. "That explains why more Oracle databases run on the Sun Sparc-Solaris platform than any other computer system."

    Regarding "slowlaris": I have shown you numerous benchmarks where Solaris is faster than Linux. In fact, what you call "Slowlaris" holds several world records today, beating everyone else. Here are several official benchmarks showing that Solaris is the fastest in the world. Just look at some entries here:


    Do you have any support for your claim that Oracle wants to kill Solaris, or is it the old FUD again? You have confessed that you FUD sometimes, and I would not be surprised if this is just more of your old FUD. Can you show any links that show that Oracle wants to kill Solaris?



    and this is one of the reasons why they're working on a better btrfs.
    Well, ZFS makes money for Oracle today. BTRFS does not make money. And BTRFS is really buggy and unstable. If Oracle were really serious about BTRFS, then Oracle would reassign lots of developers to it. Oracle would kill off ZFS and Solaris, and reassign all those Solaris developers. That has not happened.



    The known fact is you're a dumb troll from osnews.
    Lots of insults from the Linux fans. Why is that? Is it because Linus Torvalds is a prick, calling OpenBSD developers "masturbating monkeys" because they focus on security? With such a master....



    Btw, btrfs, unlike zfs, is a 64-bit system, so there's lower overhead.
    Holy shit. This cannot be true? Jesus. That is a really bad design choice by the BTRFS team. I really hope it is not true, because that would make BTRFS much worse than I ever imagined. Are you sure?



  • kraftman
    replied
    Originally posted by kebabbert View Post
    The reason you cannot convert ext4 to ZFS is that no Solaris user runs that unsafe filesystem called ext4. I hope you know that ext4 is unsafe and might corrupt your data.
    You knew zfs is also unsafe and can corrupt your data, but you ignored that FACT.



  • kraftman
    replied
    Originally posted by kebabbert View Post
    Thanks for your constructive remark.

    So you do have important facts? That BTRFS is superior because it uses B+ trees? Are you serious? Let me see..
    -BTRFS is the best!
    -Why?
    -Because it uses B+ trees!
    -So? Does that make Btrfs superior? Why?
    -Just because it is, here is the important fact: "BTRFS is superior because it uses B+ trees!"
    -Eh?
    And do you have any? It is widely known that Oracle wants to kill old, crappy, legacy slowlaris, and this is one of the reasons why they're working on a better btrfs. The known fact is you're a dumb troll from osnews. Btw, btrfs, unlike zfs, is a 64-bit system, so there's lower overhead.



  • Ibidem
    replied
    Originally posted by kebabbert View Post
    The SGI Altix server with thousands of cores is the same thing. Just look at the benchmarks: they are all embarrassingly parallel workloads, that is, cluster workloads. Not SMP workloads.

    Someone explains:
    "I tried running a nicely parallel shared memory workload (75% efficiency on 24 cores in a 4 socket opteron box) on a 64 core ScaleMP box with 8 2-socket boards linked by infiniband. Result: horrible. It might look like a shared memory, but access to off-board bits has huge latency."

    So, you are wrong. There are no big "SMP" Linux servers on the market today. Of course, there are lots of clusters running Linux, and Linux is very good at running clusters. But for one huge fat server, the biggest Linux server I have seen benchmarks for is 48 cores. There might be bigger. From the article above:
    I'll grant the point, although large SMP systems wouldn't make much sense outside of highly parallel loads, and ScaleMP is not SGI.
    Thus, either you reprogram your workload into a clustered workload, or you get an SMP server: a single fat 8-socket server or, "if you are lucky and can find one, a 16-socket Xeon box". But I don't know if there are any 16-socket Linux boxes today. I know that Oracle sells an 8-socket x86 server, so you could install Linux onto that, but I don't know how well Linux would scale on 8 sockets with 64 cores.
    The reasonable expectation: the same as it scales on a single-image cluster with 64 cores, if not better due to the reduced overhead.
    It's good enough for Oracle to certify RHEL, OEL, SuSE, and Oracle VM (Linux + Xen) on it, though I can't say that proves too much judging by Oracle's past moves.
    Ted Ts'o, the ext4 creator, recently explained that until now 32 cores was considered exotic and expensive hardware by Linux developers, but that is changing, which is why Ted is now working on scaling up to as many as 32 cores. But Solaris/AIX/HP-UX/etc. kernel devs have had access to large servers with many CPUs for decades. Linux devs have only recently got access to 32 cores. Not 32 CPUs, but 32 cores. After a decade(?), Linux might handle 32 CPUs too.
    Thanks to Eric Whitney's benchmarking results, I have my money shot for my upcoming 2011 LCA talk in Brisbane, which will be about how to improve scalability in the Linux kernel, using the case study of the work that I did to improve scalability via a series of scalability patches that were developed during 2.6.34, 2.6.35, and 2.6.36 (and went into the kernel during subsequent merge window). These benchmarks were done on a 48-core AMD system (8 sockets, 6 cores/socket) using a 24 SAS-disk hardware RAID array.



    I think it is well known that Linus has a big ego and can be a prick sometimes. That Linus would name his own creations after himself is quite reasonable, no? Stallman said "I am not naming GNU for Stallmanix" - criticizing Linus for having a big ego.
    Whoosh.
    In case you didn't notice, the second example (git) means "an obnoxious person" in British English.
    As far as how Linux got the name, I'd refer you to "Just for Fun"--but you probably wouldn't read it anyhow.
    Most cases I know of where he has been criticised were jokes that went over badly. There may be other cases.


    All this stuff you talk about - I don't know if people consider that tech new and revolutionary. Is it unique, and does everyone drool over it? No.

    Can you name something that everyone drools over and wants - like ZFS, DTrace, etc., something that is really hyped? I have never heard Solaris or IBM AIX gurus get excited over something that Linux has. Can you name something? But everyone is drooling over ZFS and DTrace, and either porting them, copying them or stealing them.
    First:
    Do you mean exclusively in the server world, or are you unaware of anything else existing?
    Because screen autorotate (first done on Linux) is now mandatory on mobile devices.
    Sun wrote DRM code for Solaris to get acceleration on Intel graphics from Mesa code written for Linux (i.e., Sun had to copy Linux features to port 3D acceleration from Linux to Solaris).

    Wake on Wireless LAN isn't something most 'gurus' would know about. It was a new feature in Linux 3.0 (which is a few months old), so few Linux users have heard about it. And perhaps if you're not running laptops, it isn't that interesting. But for a system administrator with mobile computers, or for someone who wants action xyz taken when a certain networking change takes place (maybe an alert when the router goes offline, or a better wardriving setup), it may be much more valuable.
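
    For the curious, with a recent iw and a driver that supports it, arming it looks roughly like this (the phy name and trigger are examples, and the exact syntax may vary by iw version):

    # wake the suspended machine when a WoWLAN magic packet arrives
    iw phy phy0 wowlan enable magic-packet
    # show what is currently configured
    iw phy phy0 wowlan show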

    Linux compatibility--it's proof that Linux has more applications. Sun, the BSDs, SCO, and HP all wished they had the same application base.
    Similarly, Sun put out a kit to port network drivers from Linux to Solaris, but had to pull it since it couldn't meet the GPL requirements.
    Kernel in userland--Probably not interesting to the average user or sysadmin. It does mean it's easier to test an environment from a different OS, and so on.
    Speaking of which, does Solaris support chroot install from another OS?
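
    (On the Linux side, a chroot install can be done from almost any running system with debootstrap; the suite, target directory, and mirror below are only an example:)

    # build a minimal Debian system into a directory, then enter it
    debootstrap stable /mnt/newroot http://ftp.debian.org/debian
    chroot /mnt/newroot /bin/bash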

    Also, Linux has a 4K stack instead of 128k, making for fewer OOM conditions. From what I've heard, Linux network code is faster, but that could be outdated: can Solaris in a virtual machine saturate a 10G ethernet connection?


    By the way:
    B+ trees make for faster searches. Of course, a faster search for a corrupt file does no good, so I wouldn't say it makes BtrFS 'better' just yet.
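
    As a rough back-of-envelope illustration of why high-fanout trees search quickly (the key count per node below is made up, not btrfs's actual layout):

    import math
    items = 10**7
    keys_per_node = 121   # hypothetical B+ tree fanout
    print(math.ceil(math.log(items, keys_per_node)))  # ~4 levels from root to leaf
    print(math.ceil(math.log(items, 2)))              # ~24 levels for a binary structure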

