Khronos Officially Releases OpenXR 1.0

  • #31
    Originally posted by skeevy420 View Post
    I know that they can see quite a bit with multi-monitor setups, but you know as well as I do that using controls to move the view around is slower than naturally moving one's head to look around. Think forward to remote-operated fighter jets flying at Mach 3, pulling 15 Gs, and outperforming anything we know of that's declassified; the person operating that is going to want to look and see with the minimum latency possible, and a spherical camera setup plus headset will provide that. Not to mention having to operate camera-view controls as well as flight controls all at once. For a Predator-style drone the multi-monitor setup works; for a hypothetical fighter/interceptor drone it doesn't make much sense outside of cruising to the target.
    Funny that you are talking about minimal latency. The signal round-trip time to your remote-operated fighter jet would be what, 200-500 milliseconds? Then there is also the issue of jamming or spoofing. (Semi-)autonomous unmanned vehicles are where the future is heading.
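    A quick back-of-envelope check on that figure (a sketch in Python; the geostationary-relay assumption and the leg count are mine, not from any real datalink spec):

    ```python
    # Rough sanity check on the 200-500 ms claim: beyond-line-of-sight drone
    # links are typically relayed through a geostationary satellite.
    C_KM_S = 299_792.458   # speed of light in vacuum, km/s
    GEO_ALT_KM = 35_786    # geostationary orbit altitude, km

    # One control round trip covers four satellite legs: ground -> sat -> aircraft
    # for the command, then aircraft -> sat -> ground for the video/telemetry.
    propagation_ms = 4 * GEO_ALT_KM / C_KM_S * 1000
    print(f"propagation alone: {propagation_ms:.0f} ms")  # ~477 ms
    ```

    Processing, encoding and queuing delays come on top of pure propagation, so for a satellite link the 200-500 ms ballpark is, if anything, optimistic.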



    • #32
      Originally posted by starshipeleven View Post
      You are NOT going to evade a missile that is faster than you, has orders of magnitude lower inertia and does not even need to care about flying properly because it's moving on thrust force alone. The procedures to counter missile locks are deploy countermeasures (flares or chaff), and if they fail to eject.
      Nonsense.

      Missiles cannot turn tighter than a fighter jet, simply because they fly much faster, at Mach 3-4. Missiles also typically have a very limited burn time (rocket motors), so they have to lead their target to minimize flight time. By changing your heading relative to the missile, you force it to maneuver and bleed energy. A typical tactic is to turn hard and fly at roughly a 90-degree angle to the attacking aircraft, forcing the missile to lead, followed by another hard turn after a few seconds to disengage. Air density can also be exploited by combining the turns with diving: thicker air reduces missile range considerably.

      The actual issue with missiles is knowing that one has been launched at you.
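      To put numbers behind this, a sustained level turn has radius r = v^2 / (g * n) for speed v and load factor n. A minimal Python sketch; the speeds and g-limits below are illustrative assumptions, not real airframe specs:

      ```python
      # Turn radius r = v^2 / (g * n): radius grows with the SQUARE of speed,
      # so a Mach 3.5 missile turns far wider than a Mach 0.9 fighter even
      # while pulling several times the g-load. Numbers are ballpark only.
      G = 9.81  # gravitational acceleration, m/s^2

      def turn_radius(speed_m_s: float, load_factor_g: float) -> float:
          """Radius (m) of a sustained level turn at the given speed and load factor."""
          return speed_m_s ** 2 / (G * load_factor_g)

      missile_r = turn_radius(1190.0, 30.0)  # ~Mach 3.5, 30 g airframe limit (assumed)
      fighter_r = turn_radius(306.0, 9.0)    # ~Mach 0.9, 9 g pilot limit (assumed)
      print(f"missile: {missile_r:.0f} m, fighter: {fighter_r:.0f} m")
      ```

      With these assumptions the missile's turn circle comes out roughly 4-5x wider despite pulling more than three times the g-load, which is exactly why forcing it to keep re-leading the target bleeds its energy.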



      • #33
        Originally posted by coder View Post
        Because that's how you seem to treat it - like you're Richard Dawkins, here to show us the light and cure us of it.
        I'm just a humble messenger.

        Lying about what? What problem are you solving?
        Lying about what it can actually do. And I'm shutting down lies so you can focus on something that could actually become real one day.



        • #34
          Originally posted by coder View Post
          You don't know what you're talking about. Hololens was self-contained
          The prototype with the best capabilities, used in the first preview, was the one tethered to a PC. Later ones were more limited, but portable.

          Self-contained ones have glitches https://www.theverge.com/2016/4/1/11...adset-hands-on

          One of us browsed ESPN, the other checked out The Verge, and both of us stood there, staring at the same wall, looking at different virtual web pages. Irritatingly, when you move your head just a little bit, the browser window in front of you moves with it. Score one for virtual reality headsets.

          and are otherwise immature https://www.geekwire.com/2019/holole...ality-headset/

          The demos I got to see were pretty basic, as the device is not yet a finished product. I was able to read GeekWire on a web browser pinned to the wall. I inspected a windmill and turbine, both of which I could rotate, re-size and pin to me, so that wherever I walked around the room the image would follow me.


          Plus there was a hands-on where I remember reviewers complaining about a bridge hologram glitching through people and other complex objects.

          Also note the very basic environment with straight and clean lines in all demos of it you find. In a more natural (and cluttered) environment it would glitch.

          Today, you can download and run AR apps on regular iPhones and many Android phones.
          Did I cite Pokemon Go? Yes I did. I know. That's basic stuff though.

          Guess what? It didn't even ship in 2015 (nor was it planned to), so pre-release software issues are understandable. But the hardware of the original Hololens was finalized back then.
          And it was insufficient. It's not a matter of software bugs, but of computing power.
          The current HoloLens 2 uses a Snapdragon 850, which is better but still inadequate.

          To actually have a decent shot at working reliably in a realistic, non-controlled environment, I'd expect NVIDIA Jetson boards or similar automotive-grade hardware as a bare minimum; a high-end mobile SoC won't cut it.

          That was my point - you seemed to imply there was a gap in hardware capabilities needed for AR,
          Yes. Maybe it's just a matter of standards. To have a system that does not make mistakes all over the place in a cluttered environment (which is a requirement for its use by military, rescue, and all the other use cases you advocate), you need more processing power than is currently employed.

          As it is, it's little more than a toy with limited capabilities. Which is kind of bad for AR that is supposed to be used out in the field, not in a controlled environment.

          By what authority?
          Right or wrong don't come from a higher authority. Still stuck in a religious mindset?



          • #35
            Originally posted by coder View Post
            Here's the current Wikipedia definition:
            Does not answer the question



            • #36
              Originally posted by coder View Post
              The benefit that AR can provide is contextual awareness. It can provide you with relevant and important information for your specific context.
              I'm (also) saying that to get the level of contextual awareness you and others cite, you would need stupendous or even outright magical levels of sensor equipment and data processing working in the background, which I don't see coming in the near future (decades).

              AR is just a display technology. Developing it now for those uses is decades too soon.

              Again, using the patrol example, it could also highlight subtle changes that occurred since previous patrols (possibly even by different soldiers), so you can see where potential IEDs might be hidden or where foes might be lying in ambush.
              I don't even want to know how many false positives such a system would give. In a natural environment there are tons of "subtle changes" every day due to random events and animals.
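              The base-rate problem here can be illustrated with arithmetic; every number below is made up purely for the sake of the example:

              ```python
              # Toy base-rate illustration: even a very accurate change classifier
              # drowns the operator in alerts when nearly all scene changes are
              # benign (wind, animals, weather). All figures are assumptions.
              changes_per_patrol = 10_000   # detected scene changes per patrol
              real_threats = 2              # actual IEDs / ambush preparations
              false_positive_rate = 0.01    # flags 1% of benign changes
              detection_rate = 0.95         # catches 95% of real threats

              false_alarms = (changes_per_patrol - real_threats) * false_positive_rate
              true_hits = real_threats * detection_rate
              print(f"{false_alarms:.0f} false alarms vs {true_hits:.1f} real detections")
              ```

              Under these assumptions even a 99%-specific detector buries the real threats roughly 50-to-1 in noise.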

              And imagine being able to see where an incoming airstrike or artillery is going to hit, so you can see if it's on-target or if you're too close & need to take cover.
              Imagine being able to see the location of "friendlies", so you can better coordinate with them, as well as avoiding friendly-fire.
              This is a very dangerous double-edged sword. You would need to be relaying information about airstrikes and friendly unit positions all over the damn place. Enemies could tap into it through vulnerabilities at any stage of your C&C system (i.e. all the way back to home base), plus of course in the field, and/or by stealing/hacking your men's AR equipment.

              Plus, with soldiers kitted out with multiple cameras and full 3D tracking, they can relay the data back to a command post, so their commander can see exactly where they are, what they see, and what's around them. All in an integrated, multi-asset battlefield display, that he might even be viewing in VR to see in 3D.
              Their commander (for any number of people, even a single squad) cannot handle that much data alone.
              At best it can be sent to a large team of analysts watching different angles of each person's 3D feed, or you need some serious AI program to digest it all and provide useful info in real time.
              Last edited by starshipeleven; 31 July 2019, 06:56 AM.



              • #37
                Originally posted by log0 View Post
                Funny that you are talking about minimal latency. The signal round-trip time to your remote-operated fighter jet would be what, 200-500 milliseconds? Then there is also the issue of jamming or spoofing. (Semi-)autonomous unmanned vehicles are where the future is heading.
                I agree that semi-autonomous is the way to go. However, that doesn't actually rule out the possibility of using VR & related tech in command and control of these craft.



                • #38
                  Originally posted by starshipeleven View Post
                  Lying about what it can actually do.
                  And when did I do that?

                  Originally posted by starshipeleven View Post
                  And I'm shutting down lies so you can focus on something that could actually become real one day.
                  Like a drunken pervert at a park, on a nice Sunday afternoon, the only problem you're solving here is your own boredom.

                  Please leave us alone.



                  • #39
                    Originally posted by starshipeleven View Post
                    The prototype with the best capabilities, used in the first preview was the one tethered to a PC. Later ones were more limited, but portable.

                    Self-contained ones have glitches https://www.theverge.com/2016/4/1/11...adset-hands-on
                    Dude, I've got news for you: pre-release software always has issues, which is why it hasn't been released.

                    You're deliberately trying to derail the conversation in a pointless digression. Classic trolling.


                    Originally posted by starshipeleven View Post
                    Did I cite Pokemon Go? Yes I did. I know. That's basic stuff though.
                    Except that was AR in name only, and can't be compared to the capabilities of Apple's ARKit or Google's ARCore. I know you don't care about the distinction - you'll just blindly label everything as "Pokemon Go" for your trolling purposes, but I mention this in case anyone else cares. Folks should look up the details, if they're interested in the state of the art in phone-based AR.

                    I don't mean to over-sell what these phones can do. IMO, they're still lacking in the sensors department, but the processing power is there to find planes, perform occlusion, estimate and match lighting, and store persistent anchor points.

                    If you want to see just what it's capable of, here's a video from a year ago:


                    The Lego AR stuff is particularly cool. I'll try to find a better video of that.

                    Originally posted by starshipeleven View Post
                    To actually have a decent shot at working reliably in a realistic non-controlled environment I expect NVIDIA Jetson boards
                    First, you don't have any basis for saying this. You can't begin to make a case for the spec of hardware that's needed and why.

                    Second, the distinction is meaningless. Jetson is pretty old by this point, and equaled or surpassed by modern high-end SoCs.

                    Originally posted by starshipeleven View Post
                    To have a system that does not make mistakes all over the place in a cluttered environment (which is a requirement for its use by military, rescue and all the other usecases you advocate) you need more processing power than is currently employed.
                    Yeah? If you're so sure, then explain what's needed and why.

                    Originally posted by starshipeleven View Post
                    As it is, it's little more than a toy with limited capabilities.
                    As if you'd know. Listening to you is a bit like listening to a fish trash-talk all the land-dwellers and talk about how bad it is to live out of the water. You have no authority to preach about AR.

                    Originally posted by starshipeleven View Post
                    Still stuck in a religious mindset?
                    No, just the pretentious idea that I want to listen to people who actually know what the hell they're talking about.

                    Anyway, as a bonus, here's a Hololens 2 demo, mostly showcasing the precision of its hand-tracking, as people interact with a virtual keyboard and virtual controls.
                    Last edited by coder; 01 August 2019, 12:44 AM.



                    • #40
                      Originally posted by starshipeleven View Post
                      Does not answer the question
                      Worse. It directly refutes your underlying assertion.

                      You need to find a better outlet for your frustrations with life.
