Microsoft Teams Is Coming To Linux


  • Marc.2377
    replied
    I wish they gave us a proper Visual Studio release for Linux. But it probably won't ever happen, as VS is too tightly integrated with Windows.

    Guess I'll have to continue to get by with Visual Studio Code and half a dozen extensions, and still fire up a Windows VM when I want to do serious development and integration testing.

    CLion is written in Java, as that seems to be all the JetBrains folks have their minds stuck on. Plus, it's quite lacking. Code::Blocks does not look good enough to my eyes. I'm not sure how well Qt Creator works without Qt. I have yet to try KDevelop.



  • kelestein
    replied
    Hi guys, first of all thanks for this post. I followed the steps and I am able to run Teams and receive calls in the Chrome browser on CentOS 7, using the string below:
    Edge Edge/17.17134 Edge 17

    But when I make a call, it shows me that it's calling, yet the receiving person never gets any incoming-call notification or popup; only after the call ends do they see a missed-call notification.

    I would really appreciate it if you could help me, because I really want to switch to Linux (CentOS 7).

    Thanks in advance
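
    For reference, here's a minimal sketch of applying that kind of user-agent spoof at launch time instead of through a browser extension. The --user-agent switch is a standard Chrome/Chromium flag, but the exact UA string and URL below are only illustrative:

    Code:
    # Start Chrome pretending to be Edge so Teams enables calling;
    # substitute whatever string worked in your user-agent switcher.
    google-chrome --user-agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.140 Safari/537.36 Edge/17.17134" https://teams.microsoft.com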




  • SystemCrasher
    replied
    Originally posted by oiaohm View Post
    This is ignoring history. Sysvinit-based distributions had a lot of third-party parts you needed, spread across individual distributions. When systemd was started, the systemd lead developer went around attempting to contact all those upstreams. Only about 1/4 of those upstreams in fact still had a maintainer. Some of those maintainers were in fact physically dead and no one had noticed. Others had changed jobs and could no longer work on an open source project. Part of the reason why systemd pulled so much into itself was to reduce this bus-factor problem by increasing the number of maintainers with project access.
    Yes, but that's only part of the problem. SysV has been too small and too simple, and gave no crap about even the most basic system management problems. This forced everyone to reinvent the wheel here and there, and it performed well below what was anticipated.

    Concrete example: httpd doesn't need access to a shell in the normal course of action. It is also exposed to the network and subject to attacks, and an available shell makes an attacker's life much easier. If one assembles a really efficient sandbox, it implies a shell would not run; should there be a startup problem, good luck debugging that environment. Oh wait, classic *nix was never designed with the current threat model in mind. On the other hand, systemd will log most kinds of errors, and even program output, without forcing me to code all that myself. And it is also a privileged entity that can do whatever it needs to, so it doesn't hit the walls. So I'd say the *nix way has strong points and weak points, and I'd rather not pick SysV init these days: it hardly addresses present-day challenges.
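
    To make that concrete, here's a minimal sketch of the kind of sandboxed, shell-free service unit being described. The directives are real systemd options, but the unit name and values are illustrative, not a tested configuration:

    Code:
    # /etc/systemd/system/my-httpd.service (hypothetical)
    [Service]
    ExecStart=/usr/sbin/httpd -DFOREGROUND
    # No privilege escalation via setuid binaries, even if compromised.
    NoNewPrivileges=yes
    # /usr, /boot and /etc become read-only; /tmp is private to the service.
    ProtectSystem=full
    PrivateTmp=yes
    ProtectHome=yes
    # stdout/stderr land in the journal, so startup failures get logged
    # without any extra scripting.
    StandardOutput=journal
    StandardError=journal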

    I would recommend a different idea. Start automated testing that builds individual parts of systemd from the systemd source code, and get more distributions making systemd more custom-installable.
    I would agree systemd may need better ways to modularize/customize. However, I can't propose a good solution that wouldn't also cause an explosive growth of complexity (and therefore turn things fragile and buggy). Automated testing isn't without its limits: even if each "unit test" is flawless, the interactions across borders soon become so diverse and numerous that all combinations can't be tested in reasonable time. This dooms everyone to a ton of bugs at the edges where units interact, and the whole thing would barely work, if at all.

    systemd (the collection) and coreutils (GNU) have a lot in common: it's all about making sure you have enough people who can maintain the code. Yes, the animation of systemd eating different parts skips over how many of those parts no longer had a maintainer responding to bugs when systemd took them in. The systemd project lead can be a jerk, but that's better than the main sysvinit project, where the project lead was completely missing for 4 years, so you could post bugs and get no answer at all. This was before systemd started. Yes, this is one of the places where the person marked as responsible was in fact dead.
    I guess it can make sense to split out e.g. network management, time sync, etc. into separate projects. But I can't propose how to modularize the core of systemd without jeopardizing what it does. If some mad genius of the system-level arts can do that, I'm longing to see it: come here and show us stupid folks how to get it right. Oh, by the way, just trying to persuade me that "I don't need to do X and Y, let's do Z instead" won't do. But historically, sysv init and related maintainers had very little to offer beyond that point.

    So one of the questions you have to answer is how you are going to maintain these projects and not end up with a key project bit-rotting because there is no one to maintain it. A lot of people don't know how big a disaster the sysvinit world was. Yes, people yell about systemd taking in projects, but most of what was taken in was taken because the project no longer had a maintainer and no one else was stepping up to take care of it.
    While this concern is somewhat valid, I have yet to see something better than systemd.

    Systemd should have been a wake-up call to work out the maintenance side, because what was getting projects eaten by systemd was in fact maintenance problems.
    Systemd is a wake-up call outlining far more issues:

    I'm not going to manage my systems as if it's the 1980s. It's nice to have tools that show the overall state, obvious faults, and differences from the "defaults", and that let me list e.g. "all active timers" with one trivial command (see the sketch below), and so on. And if things go wrong, I would expect some hint about what's wrong, without coding debug logging into a script myself; it's very annoying to do that each and every time. I don't get why a typical operation like arranging service startup should be as complicated as that.
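
    A few of the one-command views being alluded to; these are standard systemctl/journalctl invocations, with nginx.service used only as an illustrative unit name:

    Code:
    systemctl --failed           # units that failed, at a glance
    systemctl list-timers        # all active timers and when they fire next
    journalctl -u nginx.service  # everything one service logged or printed
    systemd-delta                # local changes relative to vendor defaults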
    Systemd is more or less aware of the concepts of package management, "factory defaults", and so on. It has well-defined and convenient ways both to supply pre-existing "system defaults" and to let the "user" override them, in a well-defined way where e.g. the package manager would not overwrite the user's changes. The overrides are simply stored in directories that should never be packaged, and they take precedence over files in the "system" directories. One can even have a partial override, listing only the differences in a drop-in unit file (see below). That is nearly impossible to do with sysv, except maybe by bringing in very advanced programming techniques (read: a VCS).
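
    A minimal sketch of that override mechanism; foo.service and the drop-in contents are hypothetical, but the command and paths follow systemd's documented layout:

    Code:
    # Package ships /usr/lib/systemd/system/foo.service.
    # The admin's partial override lives where no package ever writes:
    systemctl edit foo.service
    #   -> creates /etc/systemd/system/foo.service.d/override.conf
    #      containing only the differences, e.g.:
    #      [Service]
    #      Environment=DEBUG=1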
    Furthermore, reverting to the "system defaults" is quite straightforward, and the changes are stored in a well-defined place. There is also the notion of e.g. "first boot", which is very handy when we need to customize a new VM instance, each hardware unit, or whatever.
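
    For instance, a sketch of both mechanisms; systemctl revert and ConditionFirstBoot= are real, while the unit and script names are made up for illustration:

    Code:
    systemctl revert foo.service   # drop local overrides, back to factory defaults

    # provision.service -- runs exactly once, on a machine's first boot
    [Unit]
    ConditionFirstBoot=yes
    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/provision-this-node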
    I like the idea of a more structured approach to logging. Systemd is only half-way there, but that's better than nothing. At least it has some APIs, and they make at least some sense.
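
    A small sketch of those APIs from the shell side; systemd-cat and journalctl field matching are standard tools, and the "my-backup" identifier is illustrative:

    Code:
    # Capture any program's output as journal entries, no logging code needed:
    echo "backup finished" | systemd-cat -t my-backup -p info

    # Query it back by field instead of grepping flat files:
    journalctl -t my-backup --since today -o json-pretty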
    The same goes for system health assessment. My favorite would be the watchdog API, which shares a watchdog with a whole subtree of processes. Sorry, but *nix people are noobs at this: "critical processes" aren't even necessarily networked, at which point the "*nix way" leaves it a virtually dark corner, unless one is willing to code what systemd does and invent some homegrown API for the i++'th time. Honestly, at least someone seriously looked at how computers are used, including in demanding areas, and designed a more or less reasonable API for it, in a way that doesn't usually interfere with the rest of system operations.
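
    A minimal sketch of that watchdog arrangement; WatchdogSec= and the sd_notify(3) protocol are real, the daemon name is hypothetical:

    Code:
    # critical-daemon.service: restarted if it stops pinging the watchdog.
    [Service]
    ExecStart=/usr/local/bin/critical-daemon
    WatchdogSec=30
    Restart=on-failure
    # The daemon pings with sd_notify(0, "WATCHDOG=1"), conventionally
    # at least every half-timeout (here, every 15 seconds).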

    These "RAS" requirements are quite basic and not even unique to e.g. servers. Say, SBCs doing control and automation tend to have similar requirements. And even desktops can have at least some use of these approaches - especially if used to do e.g. my job, so I could be unable to afford a week to dig into some obscure system oddity and have to recover operational state ASAP instead. It no longer single big computer, one per city, so relevant practices mostly gone obsolete and management paradigms shifted. At which point sysv init grown quite irrelevant to present day state of things.



  • skeevy420
    replied
    Originally posted by DoMiNeLa10 View Post

    Just look at the articles about running Steam games. There's plenty of multilib software compiled against old versions of libraries, and there's plenty of information on how to get these binaries to run. As an example, there's libcurl-compat (and a multilib version of it).
    All of that stuff is what makes Arch/Manjaro a great Linux gaming OS. That, and being able to easily pull in updates from a project's master branch when it contains a crucial game fix.



  • Guest
    Guest replied
    Originally posted by tildearrow View Post
    Also, I don't think these compatibility packages exist in Arch...
    Just look at the articles about running Steam games. There's plenty of multilib software compiled against old versions of libraries, and there's plenty of information on how to get these binaries to run. As an example, there's libcurl-compat (and a multilib version of it).
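
    For what it's worth, a sketch of how such a compat package is typically used; the preload path follows Arch's libcurl-compat layout (adjust as needed), and the game binary is hypothetical:

    Code:
    # libcurl-compat ships a libcurl.so.4 without versioned symbols;
    # preloading it satisfies binaries linked against older libcurls.
    LD_PRELOAD=/usr/lib/libcurl-compat.so.4 ./old-game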



  • oiaohm
    replied
    Originally posted by tildearrow View Post
    Exactly, and in my opinion this is the number 1 reason why Linux has failed on the desktop.
    Every single library belongs to the system, even the ones which tend to break often.
    These should NOT be part of the distribution in any way, but instead be bundled with the application, the macOS-or-Windows way.
    Really, there are a few things here that are not exactly facts. The Linux Standard Base runtime was the first time we saw that not every single library belongs to the core system. Yes, the Linux Standard Base kind of died. Interestingly enough, the Linux Standard Base development lives on in Flatpak as the freedesktop platform. Yes, work is under way to allow applications in snap format, as well as flatpak, to use the versioned freedesktop platform instead of being built against particular versions of Ubuntu.

    Just because a library is provided by the distribution does not mean you have to use it: the Linux Standard Base got the necessary modifications into the dynamic loader, by default, precisely so that you don't have to.

    If you use the distribution-provided runtime, you are agreeing to stay in line with the distribution's updates.

    The Steam runtimes and the freedesktop platform runtimes are two options that let application developers avoid distribution update issues as much as possible.

    Really, the problem here is the very idea that every single library belongs to the system. That is OS X/Windows/closed-source OS thinking. Valve, when they made their Steam runtimes, were not thinking that every single library belongs to the system.

    Just because someone is handing out 50 dollar notes on the street does not mean you have to go over and take one. If you do go over and take one, and get mugged because it was a trap, whose fault was it?

    I see using distribution libraries no differently: distributions have a policy of updating their libraries in alignment with the applications they provide. So they are effectively telling third-party developers they are going to mug them if they don't get their application into the distribution. Yet for some reason people get upset when it happens.

    The Linux Standard Base, and now the freedesktop platform, started for those who were going to be out of alignment.
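
    A minimal sketch of what that looks like in practice with Flatpak; the commands are standard, the runtime branch is simply the one current at the time of this thread, and everything here is illustrative rather than a recommendation:

    Code:
    # Applications declare the versioned runtime they were built against,
    # so they keep working regardless of the host distribution's libraries.
    flatpak install flathub org.freedesktop.Platform//19.08
    flatpak list --runtime   # show which runtime branches are installed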



  • tildearrow
    replied
    Originally posted by DoMiNeLa10 View Post

    Some dynamically linked software can get in your way, libcurl being easily the worst. At least there are compatibility packages for these.
    Exactly, and in my opinion this is the number 1 reason why Linux has failed on the desktop.
    Every single library belongs to the system, even the ones which tend to break often.
    These should NOT be part of the distribution in any way, but instead be bundled with the application, the macOS-or-Windows way.

    However, this is impossible, because libraries like ICU are required by the system, and that is one hell of a library that breaks every quarter!

    Also, I don't think these compatibility packages exist in Arch...
    Last edited by tildearrow; 15 September 2019, 12:15 PM.



  • Guest
    Guest replied
    Originally posted by tildearrow View Post

    They must have been statically linked then. This is what I get when running a program that was compiled 4 years ago (on Arch too):

    Code:
    ./soundtracker: error while loading shared libraries: liballegro.so.5.0: cannot open shared object file: No such file or directory
    Some dynamically linked software can get in your way, libcurl being easily the worst. At least there are compatibility packages for these.



  • tildearrow
    replied
    Originally posted by DoMiNeLa10 View Post

    I was able to just run binaries over a decade old on my Arch install, so I think that userspace compatibility is stellar. I didn't have to bother compiling software again.
    They must have been statically linked then. This is what I get when running a program that was compiled 4 years ago (on Arch too):

    Code:
    ./soundtracker: error while loading shared libraries: liballegro.so.5.0: cannot open shared object file: No such file or directory
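
    For what it's worth, a quick sketch of diagnosing and papering over that kind of breakage, assuming a copy of the old library can still be obtained; the directory is illustrative:

    Code:
    # List the shared libraries the binary wants and spot the missing ones:
    ldd ./soundtracker | grep 'not found'

    # Point the loader at a directory holding the old liballegro build:
    LD_LIBRARY_PATH=$HOME/oldlibs ./soundtracker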



  • pal666
    replied
    Too long a thread for the subject:
    Microsoft's Skype has been on Linux for many years.
    Microsoft Teams has a browser client; I use it from Linux.

    Leave a comment:
