Demo Of The Lima Driver On The KDE Spark Tablet


  • WillyThePimp
    replied
    And, according to DRAMeXchange.com, an 8Gb (1GB) LPDDR2 chip costs, on average, $17.40, at an unspecified frequency, though 533 MHz is the usual. 8GB (8 × $17.40 ≈ $139) readily surpasses the $100 mark, and that is not even the end-user price.

  • WillyThePimp
    replied
    Originally posted by Qaridarium
    No, this is just wrong! 32€ is the price for the consumer with ALL included!



    No, 32€ is the end price with everything included: memory chips + board + PCB routing + soldering + quality assurance + profit.

    You are just talking bullshit with your "100€" figure; a smartphone costs maybe 25€ more if you go from 1GB of RAM to 8GB of RAM.

    But the world goes the other way around: they put an extra 400% profit margin on top as a high-tech bonus. That has nothing to do with costs, only with making more profit out of the same hardware.

    Hey, every time I go into a store and buy 8GB of RAM for a 32€ consumer price, you are proved wrong. The consumer price is not what big companies pay.
    Big companies maybe pay 0.80€ per 1GB chip, then 1€ production costs for the complete 8GB and 1€ for shipping and all the remaining costs; that means 8.40€ (8 × 0.80€ + 1€ + 1€), and the difference to 32€ is pure "profit".

    In fact, if I can buy 8GB of RAM for 32€, then that is never the COST for a big company! That is just a fact, and it makes you stupid as HELL!

    In fact, I can already buy a mobile device for 300€ with 16GB of RAM! That makes your $1000 claim completely wrong!

    And no, sorry, being stupid is not an argument for being banned, so sadly you will not be banned LOL
    I was referring to the end-user price, but it seems your cleverness doesn't let you see such a simple implicit message.
    Texas Instruments' OMAP doesn't use RAM manufactured by TI; they buy it and stack it PoP (package-on-package).
    And by the way, where do you buy those 8GB of LPDDR2 (mobile) PoP RAM? You are so clueless, I feel pity for you.

  • ldesnogu
    replied
    Originally posted by Ansla View Post
    The ARM Thumb mode actually does something useful and does not add much to the complexity of the chips, since it's a subset of the 32-bit ISA, not a completely different ISA. BTW, isn't Thumb 2 just an extension to Thumb? From my understanding it didn't break compatibility with old Thumb, just added new opcodes.
    You are right: Thumb is a full 32-bit instruction set (with 16-bit instruction encodings), and Thumb2 extends it.

    From a user-mode point of view, the latest architecture revisions didn't break compatibility. Obviously some things had to change from a system point of view, but that only impacts kernel and device-driver developers.
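
    To make this concrete, here is a minimal sketch, assuming a GNU ARM cross-compiler such as arm-linux-gnueabi-g++ is available (the file name isa.cpp is hypothetical):

        // Build the same function for either encoding:
        //   arm-linux-gnueabi-g++ -O2 -marm   -c isa.cpp   // 32-bit ARM encodings
        //   arm-linux-gnueabi-g++ -O2 -mthumb -c isa.cpp   // mostly 16-bit Thumb encodings
        //
        // Disassembling both objects (objdump -d) shows the Thumb build
        // using shorter opcodes while still operating on full 32-bit
        // registers, which is why Thumb counts as a full 32-bit
        // instruction set rather than a "16-bit mode".
        int add(int a, int b) {
            return a + b;   // one ADD instruction: 4 bytes in ARM, 2 in Thumb
        }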

  • Ansla
    replied
    My point was that the x86 16-bit mode is outdated, with lots of useless features, and completely unusable by modern OSs. The ARM Thumb mode actually does something useful and does not add much to the complexity of the chips, since it's a subset of the 32-bit ISA, not a completely different ISA. BTW, isn't Thumb 2 just an extension to Thumb? From my understanding it didn't break compatibility with old Thumb, just added new opcodes.

  • curaga
    replied
    Originally posted by Ansla View Post
    Q, you neglect some "details" of the x86 architecture, namely its huge legacy baggage from the late '70s and early '80s. The most obvious part of this baggage is the 16-bit mode of operation that your latest-gen 64-bit CPU still possesses. This requires some pretty complex logic for features that modern applications never use, like BCD arithmetic.

    The good news is that with the BIOS fading away and being replaced by EFI (and hopefully coreboot), future generations of CPUs will no longer need to implement the 16-bit mode. But if, by the time ARM ships 64-bit CPUs, x86 still has 16-bit real mode, you can bet the 64-bit ARM CPUs will be massively faster per watt than 64-bit x86 CPUs.
    BTW, ARM has a 16-bit mode too, it's called Thumb. They just occasionally break backwards compat, introducing Thumb2 etc., which x86 can't really do.

  • Ansla
    replied
    Originally posted by Qaridarium
    You are just comparing apples with bananas: you cannot compare a 32-bit CPU with a 64-bit CPU!
    I'll wait; in 2014 ARM will release their 64-bit version, and then we can compare without the 32-bit fakery!
    For me 32-bit is a joke; I like systems with 32GB of RAM and more, not 4GB.
    And no, PAE is just fake! If you compare ARM+PAE vs native 64-bit, ARM loses all the benchmarks just because of PAE.

    This means I don't believe your performance-per-watt argument, because I only count native 64-bit apps.
    Q, you neglect some "details" of the x86 architecture, namely its huge legacy baggage from the late '70s and early '80s. The most obvious part of this baggage is the 16-bit mode of operation that your latest-gen 64-bit CPU still possesses. This requires some pretty complex logic for features that modern applications never use, like BCD arithmetic.

    The good news is that with the BIOS fading away and being replaced by EFI (and hopefully coreboot), future generations of CPUs will no longer need to implement the 16-bit mode. But if, by the time ARM ships 64-bit CPUs, x86 still has 16-bit real mode, you can bet the 64-bit ARM CPUs will be massively faster per watt than 64-bit x86 CPUs.

    And even if x86 does drop the obvious baggage, ARM CPUs will probably still be significantly faster per watt than x86, because of the bad influence the early 16-bit design has had on the later x86 ISAs: mostly the 32-bit one, but to a lesser degree the 64-bit one as well.

    Probably what both AMD and Intel are betting on is that GPGPU will become so widespread that the CPU itself becomes mostly irrelevant for any computation-intensive task, so the inefficiency of the x86 architecture will go unnoticed.
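
    To make the BCD point above concrete, here is a minimal sketch using GCC inline assembly; it must be built 32-bit (e.g. g++ -m32 bcd.cpp), since the DAA instruction is not encodable in 64-bit mode:

        // DAA ("decimal adjust after addition") fixes up AL after adding
        // two packed-BCD bytes - exactly the kind of legacy feature every
        // x86 CPU still implements but no modern application uses.
        #include <cstdio>

        int main() {
            unsigned char a = 0x38, b = 0x45;   // packed BCD for decimal 38 and 45
            unsigned char sum;
            __asm__ ("addb %2, %0\n\t"          // 0x38 + 0x45 = 0x7D (not valid BCD)
                     "daa"                      // decimal adjust: 0x7D -> 0x83
                     : "=a" (sum)
                     : "0" (a), "q" (b)
                     : "cc");
            std::printf("0x%02x\n", sum);       // prints 0x83, i.e. BCD for 38 + 45 = 83
            return 0;
        }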

  • WillyThePimp
    replied
    Originally posted by smitty3268 View Post
    There are some advantages to the x86_64 architecture over x86. That cannot be said in general for all architectures, though - stating that a generic ARM CPU must be 64-bit or it sucks is just stupid.
    Neither is there anything implying a need for it to be so. I develop software, and my 3GB laptop suits me more than well enough. If I ever step up from my current work to a more complex and demanding environment, so will my tools. Tablets and mobiles are used mainly by socialite businessmen, public communicators, and people who don't do interesting things. I have a hard time programming, or even writing anything long, on a 7" touchscreen. There is no such thing as a mobile proton collider to take along with your tablet for your scientific needs, and touch panels aren't reliable enough for any (serious) artist (I can't paint shit on one the way I do on paper), etc.

  • smitty3268
    replied
    Originally posted by WillyThePimp View Post
    32 euros to manufacture, but the added PCB routing, complexity, soldering, and quality assurance surely make it go over 100 euros if you want to make some $$ off it. By any chance, could you explain to us the need for 64 bits in a mobile landscape? If such a need existed, why aren't we using MIPS as of now? What are the advantages of 64 bits over 32 bits? Does it make sense to trade added power consumption for a feature that isn't a feature in a mobile context?

    And just listing, off the top of my head, workloads which could benefit from 64-bit memory addressing: how important is it for scientists working on large simulations, let's say in quantum chemistry, to compute them on a mobile tablet? Would a 3D artist be rendering something on his/her tablet? Heck, I've run out of examples.

    Btw, I vote for you being banned. The other guy, I like him much more than you. He has rhetoric.

    And I'm a computer science student; I know what I'm talking about.
    There are some advantages to the x86_64 architecture over x86. That cannot be said in general for all architectures, though - stating that a generic ARM CPU must be 64-bit or it sucks is just stupid.

  • WillyThePimp
    replied
    Originally posted by Qaridarium

    You are stupid as hell, because right now you can buy $1000 smartphones without 8GB of RAM and without 64-bit.
    1GB of RAM only costs 4€ right now; that means only 32€ for 8GB, and the OEMs get it cheaper, around 20€.
    Do you really think 20€ of RAM is a big part of a $1000 smartphone?
    And about 64-bit: if they used MIPS they would have 64-bit already, and in 2014 64-bit ARM chips will be the standard.

    Also, "GHz" burns more heat than 64-bit; in fact you can do the same work in the same time with fewer MHz on 64-bit, which means 64-bit saves you energy* (*= if you need more than 4GB of RAM, compared to PAE).

    And hey, I vote for this version: "Anybody in favor of sending a request to the forum admins for banning idiots like Sonadow on the grounds of:"
    32 euros to manufacture, but the added PCB routing, complexity, soldering, and quality assurance surely make it go over 100 euros if you want to make some $$ off it. By any chance, could you explain to us the need for 64 bits in a mobile landscape? If such a need existed, why aren't we using MIPS as of now? What are the advantages of 64 bits over 32 bits? Does it make sense to trade added power consumption for a feature that isn't a feature in a mobile context?

    And just listing, off the top of my head, workloads which could benefit from 64-bit memory addressing: how important is it for scientists working on large simulations, let's say in quantum chemistry, to compute them on a mobile tablet? Would a 3D artist be rendering something on his/her tablet? Heck, I've run out of examples.

    Btw, I vote for you being banned. The other guy, I like him much more than you. He has rhetoric.

    And I'm a computer science student; I know what I'm talking about.
    Last edited by WillyThePimp; 13 February 2012, 09:59 PM. Reason: Added " the"
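
    As a concrete illustration of the PAE-versus-native-64-bit point running through this exchange, here is a minimal sketch (assumption: the same file built twice on a 64-bit Linux host, once with g++ -m32 and once with plain g++). PAE lets a 32-bit kernel see more than 4GB of physical RAM, but each 32-bit process still gets at most a 4GB virtual address space:

        #include <cstdio>
        #include <cstdint>
        #include <cstdlib>

        int main() {
            unsigned long long want = 5ULL << 30;   // 5 GiB
            if (want > SIZE_MAX) {                  // always true in a 32-bit build
                std::puts("5 GiB cannot even be expressed in this address space");
                return 1;
            }
            // In a 64-bit build the request is at least representable,
            // and with memory overcommit it will typically succeed.
            void *p = std::malloc(static_cast<size_t>(want));
            std::printf("5 GiB allocation %s\n", p ? "succeeded" : "failed");
            std::free(p);
            return 0;
        }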

  • MartinK
    replied
    Originally posted by DarkCloud View Post
    The biggest problem with KDE on any tablet is that it looks terrible, and its behavior is no better: no design, just a giant hack. KDE looks OK on a desktop because the desktop paradigm it follows is a rather old standard - title bars with close, maximize, and minimize buttons, etc. These don't translate to the tablet.
    I personally don't really care about the user interface. This is the first modern tablet computer where you can use a proper GUI toolkit (Qt + QML) and the other parts of a full Linux stack. So long as the GUI lets me launch my Qt + Python application, I'm fine.
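
    For context, a minimal sketch of launching a QML scene with that toolkit (shown in C++ against the era's Qt 4.x QDeclarativeView API; main.qml is a hypothetical file, and the same pattern is reachable from Python through bindings such as PySide):

        #include <QApplication>
        #include <QDeclarativeView>   // Qt 4.7+ QML viewer
        #include <QUrl>

        int main(int argc, char *argv[]) {
            QApplication app(argc, argv);
            QDeclarativeView view;                            // hosts the QML scene
            view.setSource(QUrl::fromLocalFile("main.qml"));  // load the UI definition
            view.showFullScreen();                            // typical presentation on a tablet
            return app.exec();
        }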
