Khronos Officially Releases OpenXR 1.0


  • Pepperoni
    replied
    Just to add a nicer voice to this incel mess...

    I've been waiting for the OpenXR 1.0 release for a long time now and I'm pleased that it is finally here. I'll definitely start work on supporting VR using OpenXR in my engine and start building applications with it (there's a rough bring-up sketch below).

    I own a Vive and I love it. It looks very dev'y with all of those lighthouse sensors pointing everywhere, works well with glasses, and has a decent screen. It's good for gaming, exercise (Beat Saber gets you really sweaty) and media consumption (watching movies in Bigscreen with a friend is really great).
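    For anyone else planning the same thing: bringing up OpenXR basically starts with creating an instance and asking the runtime for an HMD system. Here's a rough, untested sketch in plain C ("MyEngine" is just a placeholder name; a real application would also enable a graphics-binding extension such as Vulkan or OpenGL before creating a session):

    #include <stdio.h>
    #include <string.h>
    #include <openxr/openxr.h>

    int main(void) {
        /* Describe the application to the runtime ("MyEngine" is a placeholder). */
        XrInstanceCreateInfo create_info = { XR_TYPE_INSTANCE_CREATE_INFO };
        strncpy(create_info.applicationInfo.applicationName, "MyEngine",
                XR_MAX_APPLICATION_NAME_SIZE - 1);
        create_info.applicationInfo.applicationVersion = 1;
        create_info.applicationInfo.apiVersion = XR_CURRENT_API_VERSION;

        XrInstance instance = XR_NULL_HANDLE;
        if (XR_FAILED(xrCreateInstance(&create_info, &instance))) {
            fprintf(stderr, "No OpenXR runtime available\n");
            return 1;
        }

        /* Ask for a head-mounted display; the returned system id is what a session
           gets created against later, once a graphics extension is enabled. */
        XrSystemGetInfo system_info = { XR_TYPE_SYSTEM_GET_INFO };
        system_info.formFactor = XR_FORM_FACTOR_HEAD_MOUNTED_DISPLAY;
        XrSystemId system_id = XR_NULL_SYSTEM_ID;
        XrResult res = xrGetSystem(instance, &system_info, &system_id);
        printf("xrGetSystem returned %d\n", (int)res);

        xrDestroyInstance(instance);
        return 0;
    }

    From there the real work is the graphics binding for your renderer, the session, swapchains and the frame loop, but the above is enough to check that a runtime is installed and reachable.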



  • coder
    replied
    Originally posted by starshipeleven View Post
    I'm (also) saying that to get that level of contextual awareness you and others cite, you would need stupendous or even outright magic levels of sensor equipment and data-processing working in the background, which I don't see coming in the near future (decades).
    I might respect your opinion, if you had any real experience developing or using the latest AR tech.

    Originally posted by starshipeleven View Post
    I don't want to know how many false positives this system would give. In a natural environment there are tons of "subtle changes" every day due to random events and animals.
    If a clump of bushes suddenly appeared that wasn't there on the last patrol, that might be noteworthy. It's just an idea, but within the realm of plausibility.

    Originally posted by starshipeleven View Post
    This is a very dangerous double-edged sword. You need to be relaying information about airstrikes and friendly unit positions all over the damn place.
    I thought about this, but battlefield communications systems always have this problem, so you've got to have secure end-to-end comms no matter what. One opportunity you have with AR is the ability to get a good, close look at the iris (or even retina) of the wearer. This can be used for biometric security, in ways that even radio cannot.

    Originally posted by starshipeleven View Post
    Their commander (of any number of people, even a single squad) cannot handle so much data alone.
    What's funny is that you only see this adding to the amount of data, rather than as a preprocessor to help fuse and filter data, so that the information that's presented is relevant, important, and easily digestible.

    Originally posted by starshipeleven View Post
    At best this can be sent to analysts for post-processing, with a large team of people looking at different angles of each person's 3D feed in real time, or you need some serious AI program to deal with it and provide useful info in real time.
    I imagine the data from the feeds being integrated into a unified battlefield map - not that the commander would dive into the 3D feed of a single soldier, though he could.



  • coder
    replied
    Originally posted by starshipeleven View Post
    Does not answer the question
    Worse. It directly refutes your underlying assertion.

    You need to find a better outlet for your frustrations with life.



  • coder
    replied
    Originally posted by starshipeleven View Post
    The prototype with the best capabilities, used in the first preview, was the one tethered to a PC. Later ones were more limited, but portable.

    Self-contained ones have glitches: https://www.theverge.com/2016/4/1/11...adset-hands-on
    Dude, I've got news for you: pre-release software always has issues, which is why it hasn't been released.

    You're deliberately trying to derail the conversation into a pointless digression. Classic trolling.


    Originally posted by starshipeleven View Post
    Did I cite Pokemon Go? Yes I did. I know. That's basic stuff though.
    Except that was AR in name only, and it can't be compared to the capabilities of Apple's ARKit or Google's ARCore. I know you don't care about the distinction - you'll just blindly label everything as "Pokemon Go" for your trolling purposes - but I mention this in case anyone else cares. Folks should look up the details if they're interested in the state of the art in phone-based AR.

    I don't mean to over-sell what these phones can do. IMO, they're still lacking in the sensors department, but the processing power is there to find planes, perform occlusion, estimate and match lighting, and store persistent anchor points.

    If you want to see just what it's capable of, here's a video from a year ago:


    The Lego AR stuff is particularly cool. I'll try to find a better video of that.

    Originally posted by starshipeleven View Post
    To actually have a decent shot at working reliably in a realistic non-controlled environment I expect NVIDIA Jetson boards
    First, you don't have any basis for saying this. You can't begin to make a case for the spec of hardware that's needed and why.

    Second, the distinction is meaningless. Jetson is pretty old by this point, and is equaled or surpassed by modern high-end SoCs.

    Originally posted by starshipeleven View Post
    To have a system that does not make mistakes all over the place in a cluttered environment (which is a requirement for its use in the military, rescue, and all the other use cases you advocate) you need more processing power than is currently employed.
    Yeah? If you're so sure, then explain what's needed and why.

    Originally posted by starshipeleven View Post
    As it is, it's little more than a toy with limited capabilities.
    As if you'd know. Listening to you is a bit like listening to a fish trash-talk all the land-dwellers and talk about how bad it is to live out of the water. You have no authority to preach about AR.

    Originally posted by starshipeleven View Post
    Still stuck in a religious mindset?
    No, just the pretentious idea that I want to listen to people who actually know what the hell they're talking about.

    Anyway, as a bonus, here's a Hololens 2 demo, mostly showcasing the precision of its hand-tracking, as people interact with a virtual keyboard and virtual controls.
    Last edited by coder; 01 August 2019, 12:44 AM.



  • coder
    replied
    Originally posted by starshipeleven View Post
    Lying about what it can actually do.
    And when did I do that?

    Originally posted by starshipeleven View Post
    And I'm shutting down lies so you can focus on something that could actually make it real one day.
    Like a drunken pervert at a park, on a nice Sunday afternoon, the only problem you're solving here is your own boredom.

    Please leave us alone.



  • coder
    replied
    Originally posted by log0 View Post
    Funny that you are talking about minimal latency. The signal round-trip time to your remote-operated fighter jet will be, what, 200-500 milliseconds? Then there would also be the issue of jamming or spoofing. (Semi)autonomous unmanned vehicles are where the future is heading.
    I agree that semi-autonomous is the way to go. However, that doesn't actually rule out the possibility of using VR & related tech in command and control of these craft.



  • starshipeleven
    replied
    Originally posted by coder View Post
    The benefit that AR can provide is contextual awareness. It can provide you with relevant and important information for your specific context.
    I'm (also) saying that to get that level of contextual awareness you and others cite, you would need stupendous or even outright magic levels of sensor equipment and data-processing working in the background, which I don't see coming in the near future (decades).

    AR is just a display technology. Developing it now for those uses is decades too soon.

    Again, using the patrol example, it could also highlight subtle changes that occurred since previous patrols (possibly even by different soldiers), so you can see where potential IEDs might be hidden or where foes might be lying in ambush.
    I don't want to know how many false positives this system would give. In a natural environment there are tons of "subtle changes" every day due to random events and animals.

    And imagine being able to see where an incoming airstrike or artillery is going to hit, so you can see if it's on-target or if you're too close & need to take cover.
    Imagine being able to see the location of "friendlies", so you can better coordinate with them, as well as avoiding friendly-fire.
    This is a very dangerous double-edged sword. You need to be relaying information about airstrikes and friendly unit positions all over the damn place. Enemies could tap into it through vulnerabilities at any stage of your C&C system (i.e. all the way back to home base), out in the field, or by stealing or hacking your men's AR equipment.

    Plus, with soldiers kitted out with multiple cameras and full 3D tracking, they can relay the data back to a command post, so their commander can see exactly where they are, what they see, and what's around them. All in an integrated, multi-asset battlefield display that he might even be viewing in VR to see in 3D.
    Their commander (of any number of people, even a single squad) cannot handle so much data alone.
    At best this can be sent to analysts for post-processing, with a large team of people looking at different angles of each person's 3D feed in real time, or you need some serious AI program to deal with it and provide useful info in real time.
    Last edited by starshipeleven; 31 July 2019, 06:56 AM.



  • starshipeleven
    replied
    Originally posted by coder View Post
    Here's the current Wikipedia definition:
    Does not answer the question



  • starshipeleven
    replied
    Originally posted by coder View Post
    You don't know what you're talking about. Hololens was self-contained
    The prototype with the best capabilities, used in the first preview, was the one tethered to a PC. Later ones were more limited, but portable.

    Self-contained ones have glitches: https://www.theverge.com/2016/4/1/11...adset-hands-on

    One of us browsed ESPN, the other checked out The Verge, and both of us stood there, staring at the same wall, looking at different virtual web pages. Irritatingly, when you move your head just a little bit, the browser window in front of you moves with it. Score one for virtual reality headsets.

    and are otherwise immature: https://www.geekwire.com/2019/holole...ality-headset/

    The demos I got to see were pretty basic, as the device is not yet a finished product. I was able to read GeekWire on a web browser pinned to the wall. I inspected a windmill and turbine, both of which I could rotate, re-size and pin to me, so that wherever I walked around the room the image would follow me.


    Plus there was a hands-on where, as I remember, reviewers were complaining about a bridge hologram glitching through people and other complex objects.

    Also note the very basic environments, with straight and clean lines, in all the demos of it you can find. In a more natural (and cluttered) environment it would glitch.

    Today, you can download and run AR apps on regular iPhones and many Android phones.
    Did I cite Pokemon Go? Yes I did. I know. That's basic stuff though.

    Guess what? It didn't even ship in 2015 (nor was it planned to), so pre-release software issues are understandable. But the hardware of the original Hololens was finalized back then.
    And it was insufficient. It's not a matter of software bugs, but of computing power.
    The current Hololens 2 uses a Snapdragon 850, which is better but still inadequate.

    To actually have a decent shot at working reliably in a realistic non-controlled environment I expect NVIDIA Jetson boards or similar car-grade hardware as a bare minimum; a high-end mobile SoC won't cut it.

    That was my point - you seemed to imply there was a gap in hardware capabilities needed for AR,
    Yes. Maybe it's just a matter of standards. To have a system that does not make mistakes all over the place in a cluttered environment (which is a requirement for its use in the military, rescue, and all the other use cases you advocate) you need more processing power than is currently employed.

    As it is, it's little more than a toy with limited capabilities, which is kind of bad for AR that is supposed to be used out in the field, not in a controlled environment.

    By what authority?
    Right and wrong don't come from a higher authority. Still stuck in a religious mindset?



  • starshipeleven
    replied
    Originally posted by coder View Post
    Because that's how you seem to treat it - like you're Richard Dawkins, here to show us the light and cure us of it.
    I'm just a humble messenger.

    Lying about what? What problem are you solving?
    Lying about what it can actually do. And I'm shutting down lies so you can focus on something that could actually make it real one day.

