AMD Shanghai Opteron: Linux vs. OpenSolaris Benchmarks

  • #91
    Originally posted by kebabbert View Post
    What do you think about several theorems that tell almost the same thing, or one big theorem that solves all cases? Which do you prefer? Several theorems that are used depending on different situations, or one theorem that is always used?
    I would say model, not theorem. I understand how things work in terms of a mental model that's like a simple simulator that I predict their behaviour with. I think that's what you're trying to get at. I don't quite know what "solve" is an analogy for when it comes to understanding operating systems, though.

    Different size systems have different bottlenecks, and sometimes you only have to worry about a few of them. I'm trying to think back to things I've done that required understanding kernel behaviour. In the cases I'm thinking of, it's always been some small aspect of the kernel that mattered for what I was doing. So I'd say I have different models of the different parts of the system. If I needed to, I could think about how those pieces fit together to understand how Linux as a whole works (when running on hardware I know enough about, which for me these days is just AMD64 PCs. I have a vague idea of how highmem on ia32 works, but it sounded horrible, so I put that on my list of good reasons to use amd64 and wish ia32 would curl up and die.)

    On smaller machines, you have to know e.g. whether grep -r on a big source tree starts to make your desktop swap out, so the programs you have open are slow while they page back in again. (The answer is yes if vm.swappiness is set to 60 (a common default), so set it to more like 20 or 30 if you run e.g. bittorrent on a desktop.) http://www.sabi.co.uk/blog/ has some good comments about Linux's VM, about GNU/Linux desktops being written by well-funded devs with big fancy machines, and about how GNU/Linux has serious weaknesses on memory-constrained systems. If you have plenty of RAM (relative to what you're doing), you don't have to care about a lot of VM behaviour.
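
    If you want to check where a box sits before touching anything, here's a minimal C sketch (just my illustration, nothing official) that reads the current value out of /proc; actually changing it needs root and is easier with sysctl:

/* Minimal sketch: print the current vm.swappiness setting.
 * Assumes a Linux system with /proc mounted. Changing the value needs
 * root, e.g.: sysctl vm.swappiness=20 (or echo 20 > /proc/sys/vm/swappiness)
 */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/sys/vm/swappiness", "r");
    int swappiness;

    if (!f) {
        perror("open /proc/sys/vm/swappiness");
        return 1;
    }
    if (fscanf(f, "%d", &swappiness) != 1) {
        fprintf(stderr, "unexpected format\n");
        fclose(f);
        return 1;
    }
    fclose(f);

    printf("vm.swappiness = %d\n", swappiness);
    if (swappiness >= 60)
        printf("desktop hint: consider lowering it to 20-30\n");
    return 0;
}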

    Another aspect of Linux that I remember figuring out was when I wanted to run cycle-accurate benchmarks of a routine I was optimizing. (http://gmplib.org/list-archives/gmp-...ch/000789.html) I ran my benchmark loop at realtime priority, so it would have 100% of a CPU core. When Linux scheduled it on the same CPU that handled keyboard interrupts, it froze the system until it was done. I found out that Linux (on my system, check your own /proc/interrupts) handles all interrupts on CPU0, and you can give a process all of a CPU without breaking your system by using taskset 2 chrt, to put it on the other CPU core. So for this I had to think only about how Linux's scheduler and interrupt handling worked (on amd64 core2duo). I didn't have to think about the network stack, the VM, the VFS, or much else.
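
    In case anyone wants to do the same thing from inside the program instead of wrapping it in taskset/chrt, here's a rough C sketch of the idea; the CPU number and priority are just example values for my two-core box, and the realtime part needs root (or CAP_SYS_NICE):

/* Rough sketch: pin the current process to CPU 1 (mask 0x2, i.e. away from
 * CPU0, which handles the interrupts on my machine) and switch it to
 * SCHED_FIFO, roughly what "taskset 2 chrt -f 50 ./bench" does.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;
    struct sched_param sp = { .sched_priority = 50 };  /* example priority */

    CPU_ZERO(&set);
    CPU_SET(1, &set);
    if (sched_setaffinity(0, sizeof(set), &set) != 0)
        perror("sched_setaffinity");

    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
        perror("sched_setscheduler (needs root)");

    /* ... run the benchmark loop here ... */
    printf("pinned to CPU 1, SCHED_FIFO priority %d\n", sp.sched_priority);
    return 0;
}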

    Another time I was curious how Linux decides whether to send a TCP ACK by itself, or let a data packet ACK receipt of packets coming the other way on a TCP stream with data flowing in both directions. I never dug in enough to find out why it decides to sometimes send empty ACK packets, but not always. This behaviour isn't (AFAIK) connected to the scheduler, VM, or much outside the network stack.
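
    If you ever want to poke at that behaviour yourself, the closest knob I know of is the TCP_QUICKACK socket option; here's a rough, self-contained sketch (the address and port are made up for the example, and the option isn't sticky: the kernel can drop back into delayed-ACK mode on its own):

/* Sketch: connect somewhere, then ask for immediate ACKs instead of letting
 * the stack delay them in the hope of piggybacking on outgoing data.
 * Watch the difference with tcpdump. 127.0.0.1:8080 is just an example.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    int quickack = 1;

    if (fd < 0) {
        perror("socket");
        return 1;
    }
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8080);                      /* example port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);  /* example host */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        perror("connect");
        return 1;
    }

    /* 1 = quickack mode (ACK right away), 0 = allow delayed ACKs again. */
    if (setsockopt(fd, IPPROTO_TCP, TCP_QUICKACK, &quickack, sizeof(quickack)) != 0)
        perror("setsockopt TCP_QUICKACK");

    /* ... read/write on fd and watch the ACK pattern ... */
    close(fd);
    return 0;
}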

    So a lot of the things that are coming to mind that I've wanted to know about Linux have been possible to understand in isolation. Which sounds to me like your "multiple theories". But if by situation you mean workload and machine type, not what part of the kernel you're trying to understand, then I think I tend to understand things in enough detail that those things would be parameters in my mental model, so it's really the same model over all conditions for whatever part of the kernel I'm trying to grok.

    I operate by delving into the details. I love details. (and, since I have ADHD, I have a hard time not getting wrapped up in details, as everyone can probably tell from my posts!) At the level of detail I like to understand, a complete theory of how Linux behaves on a whole range of hardware would be more than I could keep in my head.

    This is maybe why I never saw eye to eye with you on your wish for a complete theory that you could just remember, one that would tell you everything about how Linux worked. I didn't really say anything before, because I couldn't think of a polite way to say that it didn't make any sense to me.

    • #92
      Originally posted by kraftman View Post
      Famous troll is back.
      Yeah, nice to meet you. :->

      Originally posted by kraftman View Post
      Linux is using mutexes. You want me to believe that Linux drivers are the most important things in HPC and other areas? Do FreeBSD or OpenBSD use mutexes? I've seen your trolling on some portals for years :> Why the hell did OpenSolaris hang for about 10 seconds when I clicked on the Firefox icon (in Sun's vbox, where other systems work like a charm)? It seems mutexes aren't helpful in this case. Can you give me some proof?

      2006:

      http://kerneltrap.org/node/6019

      http://www.comptechdoc.org/os/linux/..._pgcmutex.html
      What I was talking about was synchronisation in the kernel. What you're talking about above is synchronisation between userland threads. Two completely unrelated things. The fact that Linux uses spinlocks is one of the reasons that its performance drops noticeably under high load on many CPUs. Other operating systems use fully functional mutexes, along with interrupt threads.
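
      To make the distinction concrete, the things your links describe look like this from userland; a trivial pthreads sketch (build with -pthread), which says nothing about how the kernel protects its own data structures internally:

/* Userland synchronisation: a pthreads mutex shared by two threads. On
 * Linux this is futex-based and the uncontended lock/unlock never enters
 * the kernel at all; it is unrelated to in-kernel locking.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long counter;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;

    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);

    printf("counter = %ld\n", counter);  /* 2000000 if the locking works */
    return 0;
}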

      • #93
        Originally posted by trasz View Post
        Yeah, nice to meet you. :->
        Nice to meet you too

        What I was talking about was synchronisation in the kernel. What you're talking about above is synchronisation between userland threads. Two completely unrelated things. The fact that Linux uses spinlocks is one of the reasons that its performance drops noticeably under high load on many CPUs. Other operating systems use fully functional mutexes, along with interrupt threads.
        I think they converted spinlocks to mutexes even in this area:

        http://lkml.org/lkml/2005/12/19/80
        http://lkml.org/lkml/2008/4/21/279

        Is what you said based on some articles, or did you spend a while searching lkml? :> Aren't you talking about a problem with a crappy malloc library?

        This article (from 2000) mentions pthreads mutexes and RTLinux:

        http://mae.pennnet.com/display_artic...secret-weapon/

        Mutexes are important for RT aren't they?
        Last edited by kraftman; 02-16-2009, 05:50 PM.

        • #94
          Originally posted by llama View Post
          I would say model, not theorem. I understand how things work in terms of a mental model that's like a simple simulator that I predict their behaviour with. I think that's what you're trying to get at. I don't quite know what "solve" is an analogy for when it comes to understanding operating systems, though.

          Different size systems have different bottlenecks, and sometimes you only have to worry about a few of them. I'm trying to think back to things I've done that required understanding kernel behaviour. In the cases I'm thinking of, it's always been some small aspect of the kernel that mattered for what I was doing. So I'd say I have different models of the different parts of the system. If I needed to, I could think about how those pieces fit together to understand how Linux as a whole works (when running on hardware I know enough about, which for me these days is just AMD64 PCs. I have a vague idea of how highmem on ia32 works, but it sounded horrible, so I put that on my list of good reasons to use amd64 and wish ia32 would curl up and die.)

          On smaller machines, you have to know e.g. whether grep -r on a big source tree starts to make your desktop swap out, so the programs you have open are slow while they page back in again. (The answer is yes if vm.swappiness is set to 60 (a common default), so set it to more like 20 or 30 if you run e.g. bittorrent on a desktop.) http://www.sabi.co.uk/blog/ has some good comments about Linux's VM, about GNU/Linux desktops being written by well-funded devs with big fancy machines, and about how GNU/Linux has serious weaknesses on memory-constrained systems. If you have plenty of RAM (relative to what you're doing), you don't have to care about a lot of VM behaviour.

          Another aspect of Linux that I remember figuring out was when I wanted to run cycle-accurate benchmarks of a routine I was optimizing. (http://gmplib.org/list-archives/gmp-...ch/000789.html) I ran my benchmark loop at realtime priority, so it would have 100% of a CPU core. When Linux scheduled it on the same CPU that handled keyboard interrupts, it froze the system until it was done. I found out that Linux (on my system, check your own /proc/interrupts) handles all interrupts on CPU0, and you can give a process all of a CPU without breaking your system by using taskset 2 chrt, to put it on the other CPU core. So for this I had to think only about how Linux's scheduler and interrupt handling worked (on amd64 core2duo). I didn't have to think about the network stack, the VM, the VFS, or much else.

          Another time I was curious how Linux decides whether to send a TCP ACK by itself, or let a data packet ACK receipt of packets coming the other way on a TCP stream with data flowing in both directions. I never dug in enough to find out why it decides to sometimes send empty ACK packets, but not always. This behaviour isn't (AFAIK) connected to the scheduler, VM, or much outside the network stack.

          So a lot of the things that are coming to mind that I've wanted to know about Linux have been possible to understand in isolation. Which sounds to me like your "multiple theories". But if by situation you mean workload and machine type, not what part of the kernel you're trying to understand, then I think I tend to understand things in enough detail that those things would be parameters in my mental model, so it's really the same model over all conditions for whatever part of the kernel I'm trying to grok.

          I operate by delving into the details. I love details. (and, since I have ADHD, I have a hard time not getting wrapped up in details, as everyone can probably tell from my posts!) At the level of detail I like to understand, a complete theory of how Linux behaves on a whole range of hardware would be more than I could keep in my head.

          This is maybe why I never saw eye to eye with you on your wish for a complete theory that you could just remember, one that would tell you everything about how Linux worked. I didn't really say anything before, because I couldn't think of a polite way to say that it didn't make any sense to me.
          Details are important. Yes, that is true. I have a double Master's degree: one in Math and one in Computer Science (algorithm theory). All this math has taught me that if you have several theorems that behave almost the same, then you can abstract them into one theorem. If you cannot, then that theory is inferior and needs to be altered into something more general. Maybe that is the reason I think that one Solaris kernel is preferable to 42 different Linux kernels chosen depending on the task you are trying to solve. You know, different tools for different tasks is NOT scalability. You can never state that the Linux kernel is scalable when you need to use different versions. The Solaris install DVD is the same, no matter which machine. THAT is scalability. It is not something we have to agree or disagree on. It is a fact. Solaris is scalable, Linux is not. Otherwise, I could equally say "C64 is scalable"; I just have to modify it. That is simply plain stupid to say. It is nothing to agree upon or not; it is stupid to say so.

          But certainly you haven't studied much math, so you don't understand what I am talking about or why I emphasize that all the time. "But I couldn't think of a polite way of saying that". If you want to get sticky, we can.
          Last edited by kebabbert; 02-17-2009, 06:24 AM.

          • #95
            Originally posted by kebabbert View Post
            Maybe that is the reason I think that one Solaris kernel is preferable to 42 different Linux kernels chosen depending on the task you are trying to solve.
            In what? This is scalability:

            http://www.linfo.org/scalable.html
            http://www.research.ibm.com/Wearable...inuxwatch.html

            Solaris/Open Solaris isn't scalable and it chokes even on desktop computers (anyone tried to run it on a watch? xd). Maybe that is the reason I think that they call it slowlaris.

            • #96
              Originally posted by kraftman View Post
              In what? This is scalability:

              http://www.linfo.org/scalable.html
              http://www.research.ibm.com/Wearable...inuxwatch.html

              Solaris/Open Solaris isn't scalable and it chokes even on desktop computers (anyone tried to run it on a watch? xd). Maybe that is the reason I think that they call it slowlaris.
              Solaris choking on desktop computers, are you referring to when you tried Solaris in VirtualBox and it paused for 10 secs? You know, in my humble opinion, it is wrong to draw the conclusion you do, that Solaris is slow. I have never had Solaris pause for 10 secs, and I've run Solaris for 10 years or so. I've also never seen Solaris crash. Never seen it happen. The pausing could have happened because of VirtualBox, you never thought of that, did you? You know, VirtualBox is not the most stable product. Especially with Solaris involved, VB doesn't work too well. On my laptop, OpenSolaris as a guest in VB takes 3-4 minutes to move the mouse pointer one inch. I should not say "Solaris is dog slow". If I did, it would be wrong.


              And your links, the "Linux definition of scalability": I don't agree with them. So you agree that C64 is scalable, right? I can run it on anything from wristwatches to supercomputers, I just have to reprogram the whole kernel for each new machine. If you ask me, that is not scalability. C64 is not scalable.

              • #97
                Originally posted by kebabbert View Post
                Solaris choking on desktop computers, are you referring to when you tried Solaris in VirtualBox and it paused for 10 secs? You know, in my humble opinion, it is wrong to draw the conclusion you do, that Solaris is slow. I have never had Solaris pause for 10 secs, and I've run Solaris for 10 years or so. I've also never seen Solaris crash. Never seen it happen. The pausing could have happened because of VirtualBox, you never thought of that, did you? You know, VirtualBox is not the most stable product. Especially with Solaris involved, VB doesn't work too well. On my laptop, OpenSolaris as a guest in VB takes 3-4 minutes to move the mouse pointer one inch. I should not say "Solaris is dog slow". If I did, it would be wrong.


                And your links, the "Linux definition of scalability": I don't agree with them. So you agree that C64 is scalable, right? I can run it on anything from wristwatches to supercomputers, I just have to reprogram the whole kernel for each new machine. If you ask me, that is not scalability. C64 is not scalable.
                Any system I tried worked perfectly in vbox, but there's another reason why I said so :> It seems that some of you are trying to move the "battlefield" away from Solaris. You can run C64 on supercomputers under an emulator, not natively (and C64 isn't comparable to any modern OS). The point is that Linux runs natively. Btw. what I see is that there are at least two definitions of scalability, and probably both are correct.

                P.S. GNU/Solaris (OpenSolaris) is quite interesting, but I don't understand why Sun, the OpenSolaris makers, don't compile it with the recommended flags. They should release an optimized x86_64 version, in my opinion. Phoronix wouldn't be cheating so much then

                • #98
                  Originally posted by kraftman View Post
                  I think they converted spinlocks to mutexes even in this area:

                  http://lkml.org/lkml/2005/12/19/80
                  http://lkml.org/lkml/2008/4/21/279
                  No. This was about replacing so-called 'semaphores' (actually, Linux's implementation of semaphores) with so-called 'mutexes'. Spinlocks are still the fundamental synchronisation mechanism.

                  Originally posted by kraftman View Post
                  Is what you said based on some articles, or did you spend a while searching lkml? :> Aren't you talking about a problem with a crappy malloc library?

                  This article (from 2000) mentions pthreads mutexes and RTLinux:

                  http://mae.pennnet.com/display_artic...secret-weapon/

                  Mutexes are important for RT aren't they?
                  Again, it's about _userland_ (pthreads) mutexes, which are completely unrelated to kernel synchronisation.

                  • #99
                    Originally posted by kraftman View Post
                    Any system I tried worked perfectly in vbox, but there's another reason why I said so :> It seems that some of you are trying to move the "battlefield" away from Solaris. You can run C64 on supercomputers under an emulator, not natively (and C64 isn't comparable to any modern OS). The point is that Linux runs natively. Btw. what I see is that there are at least two definitions of scalability, and probably both are correct.

                    P.S. GNU/Solaris (OpenSolaris) is quite interesting, but I don't understand why Sun, the OpenSolaris makers, don't compile it with the recommended flags. They should release an optimized x86_64 version, in my opinion. Phoronix wouldn't be cheating so much then
                    No, I am not trying to move the battlefield. I am only saying: give me any OS, and I claim it is possible to reprogram it so it runs on whatever machine you want. For instance, I could heavily reprogram C64 to run on a large cluster. According to you, C64 would then be scalable - because it runs natively on large clusters. I don't agree with your definition; that is not scalability.

                    And of course Linux guys find GNU/OpenSolaris interesting. I know Solaris guys that hate OpenSolaris. "It is not Solaris anymore", they say. It is GNU, with a Solaris kernel. Of course Linux guys like Ian Murdock and his Debian, so they probably find OpenSolaris easier to like than Solaris. I personally think OpenSolaris is more Linux than Solaris. It is more Linux userland than Solaris. I am sceptical of OpenSolaris. Actually, I've never really tried OpenSolaris; I installed it in VB, but it took 5 min to move the mouse, so I just shut it down. I myself prefer Solaris.

                    And I don't understand what you mean by saying they "should have OpenSolaris 64 bits". Solaris has been 64-bit for many years. Upon install it chooses between 32-bit and 64-bit automatically.





                    Trasz
                    Do you have any proof and links? Then maybe you could settle this discussion once and for all.

                    • Originally posted by kraftman View Post
                      P.S. GNU/Solaris (OpenSolaris) is quite interesting, but I don't understand why Sun, the OpenSolaris makers, don't compile it with the recommended flags. They should release an optimized x86_64 version, in my opinion. Phoronix wouldn't be cheating so much then
                      Gee, not again! kraftman, the Phoronix test suite *COMPILES THE SOFTWARE ON ITS OWN* (with an outdated gcc compiler on OpenSolaris); the binaries that come with OpenSolaris are just fine, don't worry. So once again, the gripe was about the test suite generating unoptimized binaries for OpenSolaris: 32-bit, without OpenMP support, etc., while at the same time generating optimized 64-bit binaries on Linux, with OpenMP support. Does that sound fair to you?
                      Last edited by etacarinae; 02-18-2009, 09:41 AM.

                      • Originally posted by etacarinae View Post
                        Does that sound fair to you?
                        Of course it's not. I would love to see some 'real world' benchmarks. Btw. I'm getting the feeling that only benchmarking the same system against different settings makes sense.

                        @kebabbert

                        As I said I see at least two definitions of scalability, but I don't care about it too much

                        Trasz
                        Do you have any proof and links? Then maybe you could settle this discussion once and for all.
                        Sometimes I base my opinions on observations, personal feeling, etc., so he may have done the same in this case. I'm willing to believe him, and I don't care about it too much either :>

                        • Originally posted by kebabbert View Post
                          Details are important. Yes, that is true. I have a double Master's degree: one in Math and one in Computer Science (algorithm theory). All this math has taught me that if you have several theorems that behave almost the same, then you can abstract them into one theorem. If you cannot, then that theory is inferior and needs to be altered into something more general. Maybe that is the reason I think that one Solaris kernel is preferable to 42 different Linux kernels chosen depending on the task you are trying to solve. You know, different tools for different tasks is NOT scalability. You can never state that the Linux kernel is scalable when you need to use different versions.
                          A generic Linux kernel will scale pretty well (e.g. from a single-core desktop to a 16 core server or larger). But you could maybe wring a little more performance out of any one situation by specializing a kernel for that specific hardware (not per workload). Most people don't compile their own, because it doesn't help that much. And you'd have to recompile your own for every security update.

                          You don't need to use different versions. So stop saying that you do. A GNU/Linux distro like Ubuntu for AMD64 will scale quite well with its one universal kernel. Compiling your own custom kernel might help more at the extreme ends of the scalability range, e.g. on really big iron or single-core slow desktops, but we both agree already that no matter what you do, Linux probably isn't ready for really big iron the way Solaris is.

                          The Solaris install DVD is the same, no matter which machine. THAT is scalability. It is not something we have to agree or disagree on. It is a fact. Solaris is scalable, Linux is not. Otherwise, I could equally say "C64 is scalable"; I just have to modify it. That is simply plain stupid to say. It is nothing to agree upon or not; it is stupid to say so.
                          You can get more scalability by building different binaries from the same code base. I don't see that as "modifying it". To get more scalability from the C64 "operating system"(?) you would need major rewrites, and first-writes of major features it doesn't have at all. Maybe you just picked an example that's too extreme, because it looks like a straw-man to me.

                          I can agree with you to this extent, though: Ubuntu GNU/Linux as a distro of compiled binaries scales to the range of machines that it targets: not-too-ancient desktops up through > 16-core servers at least. To go beyond that range, it helps to start customizing the distribution's source and rebuilding parts. Specifically, you can maybe gain some performance by editing the configuration files for the Linux kernel and rebuilding that package.

                          Even unmodified, Ubuntu will run on large machines (they compile the kernel with NR_CPUS=64, so cores beyond that will go unused; Linux itself claims it can be compiled for up to 512 cores). Maybe some bottlenecks will be worse than with a custom kernel that leaves out options you don't need, and so on. I don't know how e.g. RHEL or SuSE configure their kernels, since I just use Ubuntu and sometimes Debian.
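
                          If you're curious what your own kernel was built for, something like this quick sketch shows the difference between the CPUs that are online and the maximum the kernel binary can address (the sysfs file is only present on kernels that export it):

/* Sketch: compare the number of online CPUs with the kernel's compile-time
 * limit. /sys/devices/system/cpu/kernel_max holds NR_CPUS-1 on kernels
 * that expose it; older kernels simply don't have the file.
 */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long online = sysconf(_SC_NPROCESSORS_ONLN);
    FILE *f = fopen("/sys/devices/system/cpu/kernel_max", "r");
    int kernel_max = -1;

    if (f) {
        if (fscanf(f, "%d", &kernel_max) != 1)
            kernel_max = -1;
        fclose(f);
    }

    printf("online CPUs: %ld\n", online);
    if (kernel_max >= 0)
        printf("kernel built for up to %d CPUs (NR_CPUS)\n", kernel_max + 1);
    else
        printf("this kernel doesn't expose kernel_max\n");
    return 0;
}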

                          If you want to talk about the scalability of a specific distro, without allowing customized kernels, then that's one thing. Linux itself doesn't have an official binary, so its native form is the source. The scalability of Linux is not just the range of machines a hypothetical distro could build a single kernel binary to do well on; it's the range of machines that kernels built from the same source code can handle. That's how I see it, anyway. Obviously you've seen my previous statement of this definition of scalability and rejected it, so that's one place where we disagree about word definitions more than about what Linux is actually like.

                          But certainly you haven't studied much math.
                          Not a lot of formal math, no. I have an undergrad B.Sc., combined honours in physics and CS. And I was always more interested in the understanding-how-the-world-works part of physics than in the math formalism. So yeah, I guess I didn't know how much of an analogy you were intending with the word "theorem". Theorem = proven hypothesis, right? My mental models of how computers behave aren't usually formally proven.

                          so you don't understand what I am talking about or why I emphasize that all the time.
                          I think I'm getting closer, but I still don't know what sort of theorem your one all-encompassing theorem that models OpenSolaris behaviour would be.

                          "But I couldnt think of a polite way of saying that". If you want to get sticky, we can.
                          Yeah, sorry, I was feeling snarky. I think we just have different ways of thinking about computers. I still don't understand how you use your way of understanding things in practice, which is why I gave some examples of how I use my way. I take back the "polite way of saying that" comment, because there's no reason for me to assume your way doesn't work well for you.

                          • Originally posted by kraftman View Post
                            In what? This is scalability:

                            http://www.linfo.org/scalable.html
                            I like that definition of scalability, and it's what I have in mind when I say "scalability". That page is especially good because at the end they get to talking about why anyone should care about scalability when choosing hardware or software. E.g. knowing that we'll need a bigger system in a year, after our business takes off and we have way more clients, we should go with something such that the time spent learning it won't be wasted later. I.e. we can get a bigger version of this hardware later, and the BIOS options will mostly look the same, and the lights will mean the same thing. And for software, once we learn our way around /proc and all that, we can use that knowledge when we put GNU/Linux on the upcoming bigger machine. This is definitely the case even if you customize Linux a bit for your bigger or smaller machine. Like I said, it doesn't make it behave qualitatively differently, just maybe a little faster.

                            • Originally posted by trasz View Post
                              No. This was about replacing so-called 'semaphores' (actually, Linux's implementation of semaphores) with so-called 'mutexes'. Spinlocks are still the fundamental synchronisation mechanism.

                              Again, it's about _userland_ (pthreads) mutexes, which are completely unrelated to kernel synchronisation.
                              Ok, what about this one:

                              http://lkml.org/lkml/2008/5/14/324

                               It's about:
                               'it turns the BKL into an ordinary mutex and removes all "auto-release" BKL legacy code from the scheduler.'
                               and:
                               'The main disadvantage of a giant lock is that it eliminates concurrency, thus decreasing performance on multiprocessor systems.'
                              So, it seems there are no performance penalties in Linux on multiprocessor systems after this change. And following this:

                              As some of the latency junkies on lkml already know it, commit 8e3e076
                              ("BKL: revert back to the old spinlock implementation") in v2.6.26-rc2
                              removed the preemptible BKL feature and made the Big Kernel Lock a
                              spinlock and thus turned it into non-preemptible code again. This commit
                              returned the BKL code to the 2.6.7 state of affairs in essence.
                               there weren't any before the commit mentioned above.

                               I said before that I believe you, but it would be nice if you could give some proof (because some people may not).

                               Btw. would such good efforts in RTLinux be possible if "Spinlocks are still the fundamental synchronisation mechanism"? If yes, I don't see anything wrong with them.


                              EDIT:

                               Ok, I found it myself. There are spinlocks, and Linux devs plan to make them preemptible, etc. But it's related to the real-time Linux approach and I'm not so sure this affects scalability. Btw. there are some changes, like memory management improvements in 2.6.28, which should improve Linux scaling.
                              Last edited by kraftman; 02-19-2009, 10:47 AM.

                              • Originally posted by kraftman View Post
                                Ok, what about this one:

                                http://lkml.org/lkml/2008/5/14/324
                                This is about the Big Kernel Lock, which is the ugly hack Linux used in the early days, instead of having separate locks for each data structure that needed protection. I think the BKL isn't used anywhere important (well, that post did talk about AC working to remove it from TTY code).

                                The BKL obviously doesn't scale well, but if you're not using a crufty old driver that uses it, and your workload doesn't otherwise hit much BKL-using kernel code, it won't really hurt scalability.

                                For real-time applications, even one infrequent source of long latency is unacceptable. E.g. you would not be happy if your music had a dropout even once in a couple of hours. You can fix that with bigger buffers, but sometimes you really need low latency: e.g. running a control program for a robot seal balancing a ball on its nose. If it doesn't move soon enough in response to input that the ball is starting to roll off, the ball will fall. It doesn't matter that the average latency is great; it's the worst case that's the deal-breaker. That's the difference between how locking matters for scalability and how it matters for real-time applications.
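
                                To put a number on "average vs. worst case", a little sketch like this (plain C; old glibc may want -lrt for clock_gettime) just sleeps 1 ms in a loop and records how late each wake-up is; the average usually looks great, and it's the occasional spike that would ruin an RT workload:

/* Sketch: measure wake-up overshoot over many 1 ms sleeps and report the
 * average and the maximum. The maximum is what a real-time task cares about.
 */
#include <stdio.h>
#include <time.h>

static long long ns_diff(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) * 1000000000LL + (b.tv_nsec - a.tv_nsec);
}

int main(void)
{
    const long long period = 1000000;   /* 1 ms in nanoseconds */
    long long total = 0, worst = 0;
    int iterations = 5000;

    for (int i = 0; i < iterations; i++) {
        struct timespec before, after, req = { 0, period };

        clock_gettime(CLOCK_MONOTONIC, &before);
        nanosleep(&req, NULL);
        clock_gettime(CLOCK_MONOTONIC, &after);

        long long late = ns_diff(before, after) - period;  /* overshoot */
        if (late > worst)
            worst = late;
        total += late;
    }

    printf("avg overshoot: %lld ns, worst: %lld ns\n",
           total / iterations, worst);
    return 0;
}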

                                Different locking primitives all have their uses. e.g. see this (old) article by Robert Love: http://www.linuxjournal.com/article/5833. But mostly, if a critical section is really short, a spinlock might be more appropriate than doing a context switch and coming back when the lock is unlocked. Otherwise you probably do want to sleep instead of busy-waiting. And there are multiple-readers, single-writer locks, and RCU (read-copy-update) data structures that let readers keep using the old copy while the writer constructs the new copy.
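
                                As a userland illustration of that trade-off, here's a rough sketch that protects the same tiny increment first with a pthread spinlock and then with a pthread mutex (build with -pthread; the timings are only meant to show the trend on a multi-core box, not to be a serious benchmark):

/* Sketch: the same short critical section guarded by a spinlock and by a
 * mutex. With a section this short, busy-waiting tends to win; make the
 * section long and the sleeping lock wins.
 */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define ITERS 2000000
#define NTHREADS 4

static pthread_spinlock_t spin;
static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
static long counter;

static void *spin_worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < ITERS; i++) {
        pthread_spin_lock(&spin);
        counter++;
        pthread_spin_unlock(&spin);
    }
    return NULL;
}

static void *mutex_worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < ITERS; i++) {
        pthread_mutex_lock(&mtx);
        counter++;
        pthread_mutex_unlock(&mtx);
    }
    return NULL;
}

static double run(void *(*fn)(void *))
{
    pthread_t t[NTHREADS];
    struct timespec a, b;

    counter = 0;
    clock_gettime(CLOCK_MONOTONIC, &a);
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, fn, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    clock_gettime(CLOCK_MONOTONIC, &b);

    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void)
{
    pthread_spin_init(&spin, PTHREAD_PROCESS_PRIVATE);

    printf("spinlock: %.3f s\n", run(spin_worker));
    printf("mutex:    %.3f s\n", run(mutex_worker));

    pthread_spin_destroy(&spin);
    return 0;
}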
