The maintainer has been absent for some time, so kernels v6.11 and v6.12 aren’t supported OOTB; to get it to work with kernel v6.11 you need to pull the fix from: !48
If I remember correctly the default sudo timeout is 5 minutes when using Yay; you should be able to increase it to something more reasonable.
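If it is sudo’s own password cache you want to extend, that’s the timestamp_timeout option; a minimal sketch (the 30-minute value is just an example, and you should edit it via visudo rather than directly):

```
# /etc/sudoers.d/timeout — raise the sudo password cache to 30 minutes
Defaults timestamp_timeout=30
```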
Additionally, you can try to force the use of amdgpu rather than radeon by setting the kernel flags:
radeon.cik_support=0 radeon.si_support=0 amdgpu.cik_support=1 amdgpu.si_support=1 amdgpu.dc=1
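To make those flags permanent, assuming your system boots via GRUB (file locations can differ per distro), append them to the default kernel command line:

```
# /etc/default/grub — add the flags to the existing defaults
GRUB_CMDLINE_LINUX_DEFAULT="quiet radeon.cik_support=0 radeon.si_support=0 amdgpu.cik_support=1 amdgpu.si_support=1 amdgpu.dc=1"
```

Then regenerate the config (e.g. grub-mkconfig -o /boot/grub/grub.cfg) and reboot.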
Device initialization failed according to the Xorg logs; check the kernel logs with dmesg or journalctl -k.
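To cut through the noise, you can filter the kernel log for just the GPU driver messages; a quick sketch (same idea works with dmesg):

```shell
# Show only kernel messages from the GPU drivers / DRM subsystem
journalctl -k | grep -iE 'amdgpu|radeon|drm'
```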
Question is going to be whether they can scale their DUV process, or whether they have to get to EUV (without ASML) within the next couple of years.
No BIOS update, but you most likely received both microcode updates (which is what will fix/mitigate the Intel issue; the BIOS update is only there to ensure everybody gets the microcode update) and firmware updates (from linux-firmware).
Of course, non-mainlined firmware (i.e. not shipped alongside the Linux kernel) is a bit more iffy; luckily it’s slowly getting better with OEMs using fwupd for those scenarios.
Could it be an issue with the Nvidia drivers? Boot with acpi=off, install the (proprietary) Nvidia drivers, and then reboot to see whether it boots normally now.
Running:
swaymsg for_window "[app_id=mpv] opacity 0.5"
works as expected on my end. Are you perhaps just missing the for_window part?
Note, you can also add multiple rules in the same block, e.g.:
for_window {
    [app_id=mpv] opacity 0.85
    [app_id=LibreWolf] opacity 0.85
}
Also note that the app_id of LibreWolf is capitalized in exactly that manner.
You can get that information (app_id, shell, etc.) by running swaymsg -t get_tree.
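The tree comes back as JSON, so if you have jq installed you can pull out just the app_ids; a quick sketch:

```shell
# List the app_id of every node in the sway tree (requires jq)
swaymsg -t get_tree | jq '.. | select(.app_id? != null) | .app_id'
```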
Feels like most people still do their scripting in Bash for portability reasons, and just run Fish as the interactive shell.
Nice, then you should be able to run vkcube to verify whether your GPU is activated properly. You can do several “iterations” here as well:
1. vkcube - does it run?
2. mangohud vkcube-wayland - does it use your Nvidia GPU?
3. mangohud vkcube - does it use your Nvidia GPU?
If neither step 2 nor 3 shows your Nvidia GPU, you can try to force it with:
mangohud vkcube-wayland --gpu_number 0
Start with the basics: do you see your Nvidia GPU pop up when running vulkaninfo --summary?
If it doesn’t pop up, verify that you have the correct Vulkan ICD files in:
ls /usr/share/vulkan/icd.d/
There you should have nvidia_icd.json and nvidia_layers.json.
If that’s missing, you’re missing the nvidia-utils part of the driver.
If they are there but your GPU still doesn’t show in the vulkaninfo summary, you could try to load the Nvidia driver manually with modprobe nvidia. Also check the kernel logs (journalctl -k or dmesg) and search for nvidia to see whether the driver got loaded correctly.
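As a rough sequence for those checks (modprobe needs root):

```shell
# Is the nvidia module loaded at all?
lsmod | grep -i nvidia
# If not, try loading it manually (needs root)
sudo modprobe nvidia
# Then look for load errors in the kernel log
journalctl -k | grep -i nvidia
```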
Breaking Linux every week or every other week? That’s almost impressive!
I’ve used: User Agent Switcher
Successfully using it on teams.microsoft.com with:
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/118.0.0.0 Safari/537.36
They support meetings in Firefox, so it’s a bit weird that they would block calls… they’re effectively the same thing.
Additionally, if you change your user agent to Chrome, things work pretty well in Firefox as far as I’ve tried it (not too extensively).
From: This thread
Seems like you can try to debug the execution by running switcherooctl launch *application*, which should (manually) do the same as right-clicking and choosing Launch with dedicated GPU, because I think Mint uses switcheroo, same as GNOME does. It would then hopefully log some debug information for you in the terminal itself.
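If I understand switcheroo correctly, the launcher mostly just sets a couple of environment variables, so you can also test the offload path by hand; a sketch (these are the standard PRIME variables, and glxinfo comes from mesa-utils):

```shell
# AMD/Intel dGPU: ask Mesa to render on the second GPU
DRI_PRIME=1 glxinfo | grep "OpenGL renderer"
# NVIDIA proprietary driver: PRIME render offload
__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo | grep "OpenGL renderer"
```

If the renderer string names your dedicated GPU, the offload path itself works and the problem is on the launcher side.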
> I’ve tried running steam with the dedicated GPU option
What exactly are you running to choose the dedicated vs integrated GPU?
I also get the freezing issue without the dedicated GPU when I launch Steam, but I found that launching directly into the Steam settings window from the menu reduces the chances of freezing.
Hmmm, whenever this happens it might be worth looking at the kernel logs to see if something crashed. You can check them with either journalctl -k -xef or dmesg.
> Kernel: 5.15.0-82-generic
In general it’s recommended to stay on newer kernels/Mesa when using the open-source GPU drivers; it could be worthwhile trying to update that (I think there’s a PPA you can pull from).
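To see what you’re actually on before and after updating, a quick check (glxinfo comes from mesa-utils):

```shell
uname -r                          # running kernel version
glxinfo | grep "OpenGL version"   # the Mesa version appears in this line
```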
If all nodes are connected to each other via Ethernet (or at least to one common node), you could go for OpenWRT’s ‘Dumb AP’ setup as well.
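The core of a Dumb AP is just bridging the radios into LAN and letting the main router handle DHCP; as a sketch, the DHCP part of the UCI config looks roughly like this (interface name assumed to be lan):

```
# /etc/config/dhcp on the AP — stop answering DHCP on lan,
# the main router serves addresses instead
config dhcp 'lan'
        option interface 'lan'
        option ignore '1'
```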
Edit: Already mentioned here; https://feditown.com/comment/1980836