After ~70% FPS Boost For Zink, The OpenGL-on-Vulkan Code Is ~50% The GL Native Speed
Originally posted by EmbraceUnity: Correct me if I'm wrong, but the main benefit of Zink is that, if successful, it would allow all driver and hardware development to eventually forget about complex OpenGL conformance and just focus on simple Vulkan primitives. All standards like OpenGL would become hardware-agnostic software issues.
Last edited by curfew; 27 September 2020, 11:13 PM.
Originally posted by ShFil: (For example, drivers for Vulkan/DX12/Mantle can take advantage of having more than one core to achieve more draw calls.)
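As a rough illustration of the pattern ShFil is referring to: in Vulkan, each thread can record draw commands into its own command buffer backed by its own command pool, and only the final queue submission is serialized. This is only a sketch, not a complete program; `device`, `queue`, and `queue_family` are placeholder handles standing in for the usual instance/device/render-pass setup, which is omitted.

```c
/* Sketch: parallel command-buffer recording in Vulkan.
 * Assumes `device`, `queue`, `queue_family` exist from ordinary setup (omitted). */
#include <pthread.h>
#include <vulkan/vulkan.h>

#define NUM_THREADS 4

extern VkDevice device;        /* assumed: created during normal Vulkan setup */
extern VkQueue  queue;         /* assumed: graphics queue from that device    */
extern uint32_t queue_family;  /* assumed: the queue's family index           */

typedef struct {
    VkCommandPool   pool;
    VkCommandBuffer cmd;
} WorkerCtx;

/* Each worker records its slice of the frame's draw calls independently. */
static void *record_draws(void *arg)
{
    WorkerCtx *ctx = arg;

    VkCommandPoolCreateInfo pool_info = {
        .sType = VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO,
        .queueFamilyIndex = queue_family,
    };
    vkCreateCommandPool(device, &pool_info, NULL, &ctx->pool);

    VkCommandBufferAllocateInfo alloc_info = {
        .sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO,
        .commandPool = ctx->pool,
        .level = VK_COMMAND_BUFFER_LEVEL_PRIMARY,
        .commandBufferCount = 1,
    };
    vkAllocateCommandBuffers(device, &alloc_info, &ctx->cmd);

    VkCommandBufferBeginInfo begin = {
        .sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO,
    };
    vkBeginCommandBuffer(ctx->cmd, &begin);
    /* In a real renderer: vkCmdBeginRenderPass, vkCmdBindPipeline, then this
     * thread's portion of the scene's draws, vkCmdEndRenderPass (omitted). */
    vkEndCommandBuffer(ctx->cmd);
    return NULL;
}

void submit_frame_multithreaded(void)
{
    pthread_t threads[NUM_THREADS];
    WorkerCtx ctx[NUM_THREADS];
    VkCommandBuffer cmds[NUM_THREADS];

    /* Recording runs in parallel -- no single "current context" bottleneck. */
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_create(&threads[i], NULL, record_draws, &ctx[i]);
    for (int i = 0; i < NUM_THREADS; i++) {
        pthread_join(threads[i], NULL);
        cmds[i] = ctx[i].cmd;
    }

    /* Only this submission is serialized onto one queue. */
    VkSubmitInfo submit = {
        .sType = VK_STRUCTURE_TYPE_SUBMIT_INFO,
        .commandBufferCount = NUM_THREADS,
        .pCommandBuffers = cmds,
    };
    vkQueueSubmit(queue, 1, &submit, VK_NULL_HANDLE);
}
```

The expensive work of building the command stream happens in the per-thread recording step; with OpenGL, the single current context per thread forces that work through one thread.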
Originally posted by curfew: OpenGL does not allow for that because it is impossible to be both multithreaded and in compliance with OpenGL. Therefore Vulkan's multithreading is out of reach when emulating OpenGL.
To be pedantically correct, the OpenGL standard does not really have any clear statements on multithreading.
It is largely left to the platform to write platform-specific rules. So technically Zink could add platform-unique features that allow more Vulkan multithreading in OpenGL for applications aware of them, and still be in compliance with OpenGL.
OpenGL compliance is just passing the OpenGL conformance test suite, which only exercises single-threaded behaviour.
It is not impossible to be both compliant with OpenGL and multithreaded, but since that means implementation-specific features, existing applications would need to be hand-picked, or only new applications could use the multi-thread features.
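For context on what those platform-specific rules look like today, here is a minimal sketch of the conventional portable way to touch OpenGL from more than one thread: two contexts in a share group (EGL here), each made current on exactly one thread, for instance a worker that streams texture uploads. The EGL calls are real API, but config/surface selection and error checking are abbreviated, so treat it as illustrative only.

```c
/* Sketch: two GL contexts in a share group via EGL.
 * A GL context can only be current on one thread at a time, so the
 * draw-call stream itself still runs on a single thread. */
#include <EGL/egl.h>

EGLDisplay dpy;
EGLConfig  cfg;
EGLContext main_ctx, upload_ctx;

void create_shared_contexts(void)
{
    dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(dpy, NULL, NULL);

    EGLint num_cfg;
    static const EGLint cfg_attrs[] = { EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT, EGL_NONE };
    eglChooseConfig(dpy, cfg_attrs, &cfg, 1, &num_cfg);
    eglBindAPI(EGL_OPENGL_API);

    /* Main rendering context, current on the render thread. */
    main_ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, NULL);

    /* Second context sharing textures/buffers with the first; it gets made
     * current on the upload thread with eglMakeCurrent there. */
    upload_ctx = eglCreateContext(dpy, cfg, main_ctx, NULL);
}
```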
Originally posted by oiaohm: It is not impossible to be both compliant with OpenGL and multithreaded, but since that means implementation-specific features, existing applications would need to be hand-picked, or only new applications could use the multi-thread features.
Exactly how that should be done I have no clue, but it's not unheard of for old ideas to finally come back because we now have the technology to implement them. Fifteen years ago the idea of dedicating a single CPU core to OS work was seen as laughable; today it is increasingly common on 8+ core systems.
Originally posted by curfew: An emulation layer cannot be as fast as a native implementation that properly utilizes the underlying hardware.
Originally posted by curfew: ...comparable to how Intel already emulates some ancient x86 instructions in their processors in exchange for simplifying and optimizing the hardware architecture.
Originally posted by mangeek: I believe this started back in the 1990s with the Pentium Pro/i686. My recollection is that the i586 and below were straight-up CISC processors, but the i686 sort of decoupled things and emulated a CISC CPU in microcode while everything really ran on a RISC-ish core.
Originally posted by Amaranth: I think you're mixing up micro-ops and microcode. There is dedicated hardware in the chip for converting most x86 instructions into micro-ops; microcode is a much slower, programmable emulation of instructions.