Originally posted by schmidtbag
Your behavior and line of thinking is bizarre. How do I have an argument "anymore" if I wasn't really disagreeing with you in the first place?
Me: This is why you're wrong
You: But it means I'm right so we agree
Me: No, your argument is still wrong
You have this annoying tendency to look at things from an all-or-nothing perspective. I don't know how many times I have to tell you that not every game has this problem, and that you don't know how every development studio works. But go ahead, feign expertise - it's totally working out for you.
You have yet to prove how you are all that special. Just because you claim authority doesn't mean you have any. This is the internet, kid - you're nothing here. Having access to a decent workstation and working with 40-250GB chunks of datasets doesn't make you an expert.
In the face of that, your "big" 25 GB database table was simply paltry.
Exactly: virtual memory is a 1990s thing. Today, there isn't much advantage in it so long as you design your system around your workflow properly. RAM is relatively cheap and abundant. If you depend on virtual memory, you're either being way too cheap, doing things wrong or inefficiently, or you have a very niche situation where perhaps you only need it very temporarily.
No... I didn't, and insisting that doesn't make it true, just like insisting you're an expert doesn't make it true. Deliberately ignoring the complete argument does nothing to make you look competent or intelligent. You pick and choose the parts that seem stupid by themselves, but you ignore the rest.
However, you didn't...
So, why should I elaborate on anything else when you have the attention span and memory of a goldfish?
Once again, you are inadvertently agreeing with me. Kinda amusing.
Now, let's say there are hundreds or thousands of smaller tables rather than one big one. Then, sprinkle in some extra processing where you're not just simply doing select/insert statements. That's what my system looks like.
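To make that concrete, here's a rough sketch (Python with sqlite3; the table names, row counts, and per-row math are all made up for illustration, not my actual system):

```python
import sqlite3

# Illustrative only: many small tables instead of one big one,
# plus per-row processing beyond plain select/insert statements.
conn = sqlite3.connect(":memory:")

for i in range(100):  # stand-in for hundreds or thousands of tables
    conn.execute(f"CREATE TABLE chunk_{i} (id INTEGER, value REAL)")
    conn.executemany(
        f"INSERT INTO chunk_{i} VALUES (?, ?)",
        [(j, j * 0.5) for j in range(10)],
    )

# "Extra processing": walk every small table and do some computation
# per row instead of issuing one big bulk SELECT.
total = 0.0
for i in range(100):
    for _id, value in conn.execute(f"SELECT id, value FROM chunk_{i}"):
        total += value * value
print(total)
```

The access pattern matters more than the raw size: lots of small tables with per-row work behaves very differently from one bulk scan over a single big table.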
Says the one with an authority complex.
And how long do you keep that machine? One, maybe two years? Considering it eats up most of your salary, that seems rather expensive, relatively speaking. You can't ignore peripherals either; after all, this isn't a server!
It's pretty rare for a company to have a looser fist than a government. Spending 500€ extra so you can see more dots would only be agreeable if your boss were clueless.
None of that changes my point; funny how I'm thought to be the strawman.
I find your interpretation of my idea confusing; no wonder you think I know nothing. If you like throwing out trendy psychological terms as if they contribute anything: you've got an extreme case of confirmation bias.
If your software loads in 25GB+ chunks during runtime when (in your case) the data points do not depend on one another, then you're not a good developer.
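For what it's worth, when the records really are independent, streaming in bounded chunks is the usual alternative to loading 25GB+ at once. A minimal sketch (the file path and chunk size are placeholders, not anyone's actual pipeline):

```python
# Sketch: stream a large file of independent records in fixed-size
# chunks so peak memory stays bounded regardless of file size.
def stream_points(path, chunk_size=1 << 20):  # 1 MiB per chunk (arbitrary)
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            yield chunk
```

Each chunk can be processed and discarded before the next one is read, so memory use stays near chunk_size rather than the dataset size.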
I can't wait for you to correct me on how clueless I am about this!
There you go again, only quoting the parts that are convenient for you.
Exactly, so it stands to reason that you don't know as much as you think. HuRr DuRr Dunning-Kruger!!! I provided examples throughout to back up my claims. The only thing you have to back up yourself is "trust me, I'm an expert", over-generalized claims, and a personal anecdote. I asked for more sources and you didn't provide them. Surely, it should be easy if you're that certain.
This isn't the comments section of some consumer electronics blog. Most of the people on here are software developers, hobbyists and students in the field. We even have major kernel developers posting here on occasion. Over here you regularly run into people who can be described as subject matter experts.
That's why I said to do compression level 9... Obviously, you won't see any noteworthy difference doing the same exact method, and I never suggested that.
You are as dense as you want your LIDAR points to be.
Using zlib at level 9 to further compress an already compressed game is a worst-case scenario example that I don't suggest anyone actually use. The point is, it manages to shave off more than just a couple percent, which shows how much potential there is for further compression: not compression using zlib, but methods optimized for specific assets. Y'know, like WebP vs BMP, or FLAC vs WAV. Go with lossy methods and you can save even more.
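If anyone wants to poke at this themselves, Python's zlib makes the experiment easy. The sample payload below is made up, and real game assets will compress differently:

```python
import zlib

# Made-up, highly repetitive payload standing in for raw asset data.
raw = b"position: 1.0 2.0 3.0; color: 255 128 0\n" * 5000

fast = zlib.compress(raw, 1)   # low effort
best = zlib.compress(raw, 9)   # maximum effort

assert zlib.decompress(best) == raw  # lossless round-trip

# Recompressing already-compressed output typically gains little,
# which is why asset-specific formats (WebP vs BMP, FLAC vs WAV)
# beat running a general-purpose compressor a second time.
double = zlib.compress(best, 9)

print(len(raw), len(fast), len(best), len(double))
```

None of this proves anything about a specific game; it just shows the general shape of the tradeoff between generic and asset-aware compression.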
Seems to me you're a strong case of Dunning-Kruger in this regard.