Closed Bug 1769845 Opened 2 years ago Closed 2 years ago

Virtually entire firefox interface non-functional on work Windows 10 VM in nightlies after 2022-05-10

Categories

(Core :: Security: Process Sandboxing, defect, P1)

Firefox 102
defect

Tracking


VERIFIED FIXED
103 Branch
Tracking Status
relnote-firefox --- 101+
firefox-esr91 --- unaffected
firefox100 --- unaffected
firefox101 + verified
firefox102 + verified
firefox103 + verified

People

(Reporter: bugs, Assigned: bobowen)

References

(Blocks 1 open bug, Regression)

Details

(Keywords: regression)

Attachments

(3 files, 1 obsolete file)

I routinely use Nightly as a kind of dogfooding exercise.
This started after updating on Monday. I switched to my backup Stable profile to get work done, even though it has fewer of my links and settings.
Tuesday's update did not resolve the problem, which also happens in clean profiles, so I decided to file a bug.

Downloading the nightly for 2022-05-15 fixed the issue. Upon allowing automatic update to today's build, 2022-05-17, it immediately started failing again.

Symptoms are that the main pane is blank, including when trying to access settings, logins, dev tools, etc. Browsing the menus works, but opening the Browser Console (Ctrl-Shift-J) also just opens a blank window.

The URL bar is interactive, and autosuggest works in the search and URL bars, but clicking on anything does nothing.

Apologies to anyone attempting an early triage. The range above is incorrect. I belatedly noticed I'd downloaded a nightly from 2022-05-07, which works.

2022-05-15-9-45 fails, so I'll have to go back a few days; the gap is probably due to the weekend.

2022-05-09-19-04-29 WORKS
2022-05-10-09-55-38 FAILS

Summary: Virtually entire firefox interface non-functional on work Windows 10 VM → Virtually entire firefox interface non-functional on work Windows 10 VM in nightlies after 2022-05-10

I'm seeing something similar in https://bugzilla.mozilla.org/show_bug.cgi?id=1770098, but on a Windows 11 device, not a VM. Seeing it in Firefox 100 and up.

Are you able to use mozregression to get to the exact commit that broke it? That would be immensely helpful.

Flags: needinfo?(bugs)

So... I'm mildly confused. I thought the purpose of mozregression was, essentially, to automate the process of downloading and running nightlies, which I just completed - that's how I got those GOOD/BAD nightlies from the prior comment.

I would have linked to the two commits in about:buildconfig, but I can't even get the one from the morning of the 10th due to general brokenness. But surely you could link it for me :)

Anyway, here's the commit from the about:buildconfig for that last-GOOD from the night of the 9th.
https://hg.mozilla.org/mozilla-central/rev/89b0f422a716f208ec0e0e85aaa5ac7c0b759ee9

Flags: needinfo?(bugs)

I thought the purpose of mozregression was, essentially, to automate the process of downloading and running nightlies, which I just completed - that's how I got those GOOD/BAD nightlies from the prior comment.

It can and should by default proceed from a date range and eventually drill down to the individual commit level, not just the Nightly release. There can be hundreds of changes per Nightly, so just getting the date range doesn't necessarily narrow things down enough (we have another report of this problem and an approximate date range but are still baffled at the cause).

That is, once mozregression knows the range of Nightlies, it will move on to our CI builds and start bisecting all the Firefox changes made that day.

Flags: needinfo?(bugs)

I see. Are those per-commit builds exposed anywhere? I can just try downloading them manually...

I'm trying not to install too much random junk on the work VM. At least Firefox is approved, and I justify nightlies for advance testing of browser changes.

Also, can you get me the changeset for the morning of the 10th, the one I can't access? That is, the one the Windows x64 build would have been built from. For starters, I could eyeball that range for anything suspicious, say WebRender, and try a "hm, what happens if I turn off WebRender?" kind of experiment.

Flags: needinfo?(bugs)
See Also: → 1769414

19:04:29, rev 89b0f422a716f208ec0e0e85aaa5ac7c0b759ee9
09:55:38, rev 58a6343ab33d9ca296fe30e66757301d1cda705d

https://hg.mozilla.org/mozilla-central/pushloghtml?fromchange=89b0f422a716f208ec0e0e85aaa5ac7c0b759ee9&tochange=58a6343ab33d9ca296fe30e66757301d1cda705d

I see. Are those per-commit builds exposed anywhere? I can just try downloading them manually...

Yeah, but believe me that's going to be a big nuisance since you'd have to chase them down throughout our CI.

I'm trying not to install too much random junk on the work VM. At least Firefox is approved, and I justify nightlies for advance testing of browser changes.

mozregression is available as a command-line Python tool on PyPI (installable with pip), if that helps. It unpacks the Firefox builds into a temporary directory, so it's not installing anything.
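
(For reference, a rough sketch of driving it from Python rather than installing anything globally; it assumes "pip install mozregression" in a virtualenv and that the --good/--bad options accept dates or changesets, which matches the documentation I'm aware of, but check mozregression --help on your version.)

  # Sketch only: wraps the mozregression command-line tool. Nothing here is
  # specific to this bug beyond the date range taken from the comments above.
  import subprocess

  def bisect_nightlies(good: str, bad: str) -> int:
      """Bisect between a known-good and a known-bad date (or changeset).

      mozregression downloads each candidate build to a temporary directory,
      launches it, asks whether it is good or bad, and keeps narrowing the
      range until it reaches an individual CI push.
      """
      cmd = ["mozregression", "--good", good, "--bad", bad]
      return subprocess.run(cmd).returncode

  if __name__ == "__main__":
      # Range from this bug: the 2022-05-09 nightly works, 2022-05-10 fails.
      raise SystemExit(bisect_nightlies("2022-05-09", "2022-05-10"))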

Also, can you get me the changeset for the morning of the 10th, the one I can't access? That is, the one the Windows x64 build would have been built from. For starters, I could eyeball that range for anything suspicious, say WebRender, and try a "hm, what happens if I turn off WebRender?" kind of experiment.

See above.

I'm worried about bug 1768014, because it moves things around in process startup and returns an error if things go wrong (instead of crashing, which would leave a trace).

So, if mozregression can't be used, perhaps you can try:

292df8ed886d
https://firefox-ci-tc.services.mozilla.com/api/queue/v1/task/W1zTxeCIT8SHQBZZsgJBcQ/runs/0/artifacts/public/build/install/sea/target.installer.exe
https://firefox-ci-tc.services.mozilla.com/api/queue/v1/task/W1zTxeCIT8SHQBZZsgJBcQ/runs/0/artifacts/public/build/target.zip

53032d712512
https://firefox-ci-tc.services.mozilla.com/api/queue/v1/task/GtZrIMQ0T52FW55gfPcrmQ/runs/0/artifacts/public/build/install/sea/target.installer.exe
https://firefox-ci-tc.services.mozilla.com/api/queue/v1/task/GtZrIMQ0T52FW55gfPcrmQ/runs/0/artifacts/public/build/target.zip

And check which of these builds works and which is broken (or whether both work or both are broken).

292df8ed886d WORKS
53032d712512 FAILS

BTW, one symptom I'm starting to notice is that the semi-unresponsive Firefox also doesn't shut down properly. On starting a working Firefox with the same profile, it says that Firefox is already running and prompts me to force-close it.

On a couple of occasions I ended up with a shutdown crash.

I don't know if this is related in any way, BTW, but I reported a bug 3 months ago about Firefox Nightlies crashing on any attempt to print, due to Cylance.

Could this be related to Cylance mucking about in the Firefox process, plus this enhanced lockdown?

Hmm, based on that PASS/FAIL, I guess I should mark this as blocking bug 1768014? Or is that rude?

Oh, I guess I can't; that one is closed, oops ☺. Never mind. I'll leave the sausage-making to y'all.

Or is that rude?

No, that is fine. There's also "Regressed by", which may be appropriate here.

Could this be related to Cylance mucking about in the Firefox process, plus this enhanced lockdown?

Maybe. We have two other reports of similar problems from users who are using Windows Defender with custom Exploit Protection settings.

Regressed by: 1768014

Set release status flags based on info from the regressing bug 1768014

:bobowen, since you are the author of the regressor, bug 1768014, could you take a look?
For more information, please visit auto_nag documentation.

Flags: needinfo?(bobowencode)
Has Regression Range: --- → yes

Here's a try push with the win32k lockdown policy passed on the command line, instead of directly into memory:
https://treeherder.mozilla.org/jobs?repo=try&revision=7615e961db6c24d7613d88ed826b3c346145af46
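
(To illustrate the approach for anyone following along: the real code is C++ in the sandbox launcher and the flag name below is invented, but the idea is that the parent encodes the policy on the child's command line instead of writing it into the child process's memory, which is the kind of cross-process operation some security products intercept. A minimal Python sketch:)

  # Illustration only; the real implementation is C++ in the Firefox sandbox
  # launcher and the flag name "--win32k-locked-down" is invented for this
  # sketch. The parent passes the policy on the child's command line, so no
  # write into the child's memory is needed to hand the value over.
  import subprocess
  import sys

  def launch_child(win32k_locked_down: bool) -> int:
      cmd = [sys.executable, __file__, "--child"]
      if win32k_locked_down:
          cmd.append("--win32k-locked-down")
      return subprocess.run(cmd).returncode

  def child_main(argv):
      locked_down = "--win32k-locked-down" in argv
      print("child sees win32k lockdown =", locked_down)

  if __name__ == "__main__":
      if "--child" in sys.argv:
          child_main(sys.argv)
      else:
          raise SystemExit(launch_child(win32k_locked_down=True))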

Flags: needinfo?(bobowencode)

Hi, here's the installer and zipped build from that try push, which changes one of the things that seems most likely to be causing a problem here.
Would you mind testing it for us:
https://firefox-ci-tc.services.mozilla.com/api/queue/v1/task/BVMTeR0oSvyd101nuNrIeQ/runs/0/artifacts/public/build/install/sea/target.installer.exe
https://firefox-ci-tc.services.mozilla.com/api/queue/v1/task/BVMTeR0oSvyd101nuNrIeQ/runs/0/artifacts/public/build/target.zip

Flags: needinfo?(bugs)
Assignee: nobody → bobowencode
Status: NEW → ASSIGNED

This transferred sandbox mitigations directly into child process memory, which
may have caused issues with some security software.

Depends on D146930

(In reply to Bob Owen (:bobowen) from comment #19)

Hi, here's the installer and zipped build from that try push, which changes one of the things that seems most likely to be causing a problem here.
Would you mind testing it for us:
https://firefox-ci-tc.services.mozilla.com/api/queue/v1/task/BVMTeR0oSvyd101nuNrIeQ/runs/0/artifacts/public/build/install/sea/target.installer.exe
https://firefox-ci-tc.services.mozilla.com/api/queue/v1/task/BVMTeR0oSvyd101nuNrIeQ/runs/0/artifacts/public/build/target.zip

As a test, I enabled the 3 settings in Microsoft Exploit Protection mentioned here: https://bugzilla.mozilla.org/show_bug.cgi?id=1770098#c15 (I had previously found that those 3 settings blocked Firefox from working with win32k lockdown enabled). This caused the provided build to fail to launch with "The application was unable to start correctly (0x80000003). Click OK to close the application". If I turn off those 3 settings, the app starts properly.

(In reply to tquan from comment #22)
...

As a test, I enabled the 3 settings in Microsoft Exploit Protection mentioned here: https://bugzilla.mozilla.org/show_bug.cgi?id=1770098#c15 (I had previously found that those 3 settings blocked Firefox from working with win32k lockdown enabled). This caused the provided build to fail to launch with "The application was unable to start correctly (0x80000003). Click OK to close the application". If I turn off those 3 settings, the app starts properly.

Thanks for testing.

I fetched BVMTeR0oSvyd101nuNrIeQ, but it doesn't seem to have changed anything. The comment mentions the command line; is there something I should be passing to disable something?

Flags: needinfo?(bugs)

(In reply to nemo from comment #24)

I fetched BVMTeR0oSvyd101nuNrIeQ, but it doesn't seem to have changed anything. The comment mentions the command line; is there something I should be passing to disable something?

No, it's just a change in the way I'm passing information to the child process; thanks for testing, though.

OK, well, they were proposing a blacklist for Cylance in bug 1756190; maybe once that is added, this problem will just go away ☺

I'm going to land this anyway, because it gets rid of a change to the Chromium code.
I'll look into blocking Cylance versions in bug 1756190.

Pushed by bobowencode@gmail.com:
https://hg.mozilla.org/integration/autoland/rev/b34e482b9971
p1: Use command line to pass whether win32k is locked down in policy. r=handyman
https://hg.mozilla.org/integration/autoland/rev/392c3e0b513c
p2: Back out changeset 6afde8456771. r=handyman
Keywords: leave-open

Do you have any new crashes in about:crashes that you think might be associated with this?

Flags: needinfo?(bugs)

The only crash I can periodically trigger is that, on closing the non-functional Firefox, it seems to stay hung in the background. On starting another Firefox it prompts me to kill it off. Once in a while, that results in a crash report, although I don't know if it'll be that useful to you...

Here is one from the 19th.

https://crash-stats.mozilla.org/report/index/2ade5ce9-c746-4886-b28a-aca220220519

Flags: needinfo?(bugs)

Oh... and Firefox still 100% reliably crashes on print preview, probably due to Cylance. I linked to that in the other bug; I can generate as many of those crashes as you like, in any version from the past few months.

Component: General → Security: Process Sandboxing
Product: Firefox → Core

FWIW, this is still going on as of today's nightly (2022-06-01), so clearly Cylance hasn't been banned yet (if indeed that is the problem).

I'm currently using the nightly from the evening of May 9th, which was the last functioning one, and just repeatedly clicking dismiss on those irritating update popups.

Not sure how many users are affected (by this instance of the problem).

Severity: -- → S3
Priority: -- → P2

(In reply to nemo from comment #34)

FWIW, this is still going on as of today's nightly (2022-06-01), so clearly Cylance hasn't been banned yet (if indeed that is the problem).

I'm currently using the nightly from the evening of May 9th, which was the last functioning one, and just repeatedly clicking dismiss on those irritating update popups.

I sympathize more than you know: bug 1769414 (https://bugzilla.mozilla.org/show_bug.cgi?id=1769414).

(In reply to Gian-Carlo Pascutto [:gcp] from comment #35)

Not sure how many users are affected (by this instance of the problem).

In my experience, I fear that this is going to affect a lot more users soon
(https://bugzilla.mozilla.org/show_bug.cgi?id=1769414#c38)

Interesting. I didn't spot yours when I filed mine, and I did hunt around a bit.

Are you using Cylance too?

Set release status flags based on info from the regressing bug 1768014

(In reply to nemo from comment #38)

Interesting. I didn't spot yours when I filed mine, and I did hunt around a bit.

Are you using Cylance too?

No Cylance. I use ESET Security, which, according to Kagami :saschanaz, can affect Firefox.
But I have completely disabled and even uninstalled (and thoroughly cleaned out) ESET, and it made no difference.

Did this change make it to stable? My coworkers are reporting a broken Firefox too. Confirmed that Firefox 101.0 stable is acting the same way.

Yes, I have seen complaints for stable as well, but I have been focusing more on Beta channel, for better or worse, as a precursor of things to come, maybe?

Unfortunately or fortunately, I have been preaching (to the choir) about Firefox Beta to every living soul I know,
and I have been getting backlash from everyone about it for 10+ days now.
You see, I recommended Firefox (and the Beta channel in particular, sometimes even the Dev channel, which I know has been essentially Beta in a different skin for a while now)
because it was supposed to be the holy grail of browsers: non-Chromium/non-Chrome-based, regularly updated and, on top of that, stable enough
to rely on for both personal and professional purposes,
on desktop/laptop (Windows 10/11, macOS) and on mobile as well.

But lately I have been proven wrong in so many ways (luckily, the mobile version is unaffected, for now...).
And I have been dealing with so many frustrated users that I have lost count.

The disappointment is genuine, both for the users who count on me and for the guy who recommended the browser (yours truly).
Anyway, I have been recommending staying on the last compatible version for a while now (something like build 2022.05.12 for the Beta channel),
and I am seriously contemplating Vivaldi (Chromium-based, but not entirely Googly, plus those Opera folks have proven themselves before) from now on,
partly because I don't seem to have been taken seriously by the Firefox devs, and partly because I fear this might take forever to get addressed and/or resolved (it's been 20+ days and counting...).

And that is really sad, because all my coworkers and the people I know are reporting the same issue.
And because I, personally, have had a really hard time switching browsers. A really hard time!
We're talking about the first time in two decades (or more).

Anyway, nemo, I really hope you are coping better than me, because I have lost face, and I have lost my favorite browser as well.

Thanks both for continuing to add more details here.

I've just created a build with all of the changes trying to address bug 1768014 backed out; would you mind trying it out?
You might want to choose a custom install and use a different path:
https://firefox-ci-tc.services.mozilla.com/api/queue/v1/task/GFy8ruUqQ5WSsP38cw6bNg/runs/0/artifacts/public/build/install/sea/target.installer.exe

Also, I couldn't spot from the comments if either of you had tried disabling win32k lockdown (to make sure it is related).
So could you also try the following in current Nightly or Release (wherever you have this issue):

  • Navigate to about:config and set security.sandbox.content.win32k-disable to false
  • Restart the browser and see if the problem is still there
Flags: needinfo?(jupiters02)
Flags: needinfo?(bugs)

After disabling the sandbox, the latest Nightly works again.

Flags: needinfo?(bugs)

I downloaded and unpacked https://firefox-ci-tc.services.mozilla.com/api/queue/v1/task/GFy8ruUqQ5WSsP38cw6bNg/runs/0/artifacts/public/build/target.zip (it's a lot easier to keep track of all these builds by unpacking them into a set of folders than by using the installer).

It works perfectly with the sandbox re-enabled.

I can confirm both cases, exactly like nemo.
The CI build works fine, regardless of the win32k sandbox setting.
And the latest Nightly build 20220604092742 works fine with the sandbox disabled (and goes belly up with the sandbox enabled).

Flags: needinfo?(jupiters02)

Thanks both.
Given that I don't have definitive proof (no reproduction) that the changes in bug 1768014 have fixed that issue, and nobody that I'm aware of has reported the issue in Fx100, I'm going to back out those changes, because they are clearly causing issues even if we can't see why or reproduce them.

I think a similar patch would be good, but I'll make sure I ask you both to test any changes. I might ask you to test some partial changes to try and identify the problem.

jupiters02: by the way, if any of your co-workers are able to use WinDbg to try to see why the content processes are crashing/failing to start, that would be amazing.

All my coworkers are reporting this as well now, so I suspect that, at least over here, it is "Cylance Protect". So if you want to try installing that on your machine, I bet you could reproduce what is happening on our systems.

These changes have caused some people to have non-functioning browsers.

(In reply to Bob Owen (:bobowen) from comment #47)

Thanks both.
Given that I don't have definitive proof (no reproduction) that the changes in bug 1768014 have fixed that issue, and nobody that I'm aware of has reported the issue in Fx100, I'm going to back out those changes, because they are clearly causing issues even if we can't see why or reproduce them.

I think a similar patch would be good, but I'll make sure I ask you both to test any changes. I might ask you to test some partial changes to try and identify the problem.

jupiters02: by the way, if any of your co-workers are able to use WinDbg to try to see why the content processes are crashing/failing to start, that would be amazing.

Latest Nightly Build 20220605213032.
New, clean profile, security.sandbox.content.win32k-disable=true (default).
Nightly is unusable, displays no content, hangs at shutdown, etc.

I am unable to reproduce it now, but for a short time, while WinDbg was running but not debugging Firefox, and even after closing WinDbg,
Firefox was actually working normally, with any profile, with or without messing with the win32k sandbox setting, exiting gracefully at shutdown and opening up normally again.
But a while (maybe 20 minutes) later, with no changes to Firefox or the system in the meantime, we were back to the misbehaving Firefox, as above.

https://we.tl/t-vKtmGtxhCR

I had a similar report: bug 1772470.
I can reproduce that one (Cylance Protect doesn't appear to have a trial version, unfortunately); it appears to come down to running with Win7 compat mode.

I don't suppose either of you is running with Win7 compat mode (or maybe something you have installed is doing this)?

Flags: needinfo?(jupiters02)
Flags: needinfo?(bugs)

I am definitely not running Firefox (or anything else) in Win7 compat mode, at least not intentionally, and I am not aware of anything installed that would force it.
I could be wrong, though; how do I check that?
I just checked firefox.exe, and in fact every executable in the Firefox folder, and none of them is in compat mode.
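
(For what it's worth, a programmatic way to double-check: per-executable compatibility-mode overrides are normally recorded under the AppCompatFlags\Layers registry keys, per-user and per-machine. The sketch below just dumps them; it is not exhaustive, since group policy or an installer can apply shims elsewhere, but a firefox.exe entry containing something like WIN7RTM would indicate Win7 compat mode.)

  # Sketch: list per-executable compatibility-mode overrides from the standard
  # AppCompatFlags\Layers keys (per-user and per-machine). This is not an
  # exhaustive check, but a firefox.exe entry mentioning e.g. WIN7RTM would
  # mean Windows 7 compatibility mode is set for it.
  import winreg

  LAYERS = r"Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers"

  def dump_layers(hive, hive_name):
      try:
          with winreg.OpenKey(hive, LAYERS) as key:
              index = 0
              while True:
                  try:
                      exe_path, flags, _type = winreg.EnumValue(key, index)
                  except OSError:
                      break  # no more values
                  print(hive_name + ":", exe_path, "->", flags)
                  index += 1
      except FileNotFoundError:
          pass  # no overrides recorded in this hive

  if __name__ == "__main__":
      dump_layers(winreg.HKEY_CURRENT_USER, "HKCU")
      dump_layers(winreg.HKEY_LOCAL_MACHINE, "HKLM")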

On a side note, I have disabled (renamed)
"%WINDIR%\System32\CompatTelRunner.exe"
because it was eating away CPU cycles in the past.

Flags: needinfo?(jupiters02)

Ditto on not running anything in compat mode intentionally. My familiarity with Windows is relatively limited, so it would be hard for me to know if there were secret global settings doing this, but when I right-clicked on Firefox Nightly in Task Manager and chose Properties, Compatibility was not checked.

Flags: needinfo?(bugs)

Thanks again.
Given that we now know of at least one way that this assertion can trip, I'm going to guess that something else is causing a similar issue here.
I've just pushed a try build with that assertion removed.

That build works fine.

Flags: needinfo?(bugs)

So this fix will not be backported to 101, based on that 103 status flag? I just need to let my coworkers know... I was telling them their Firefoxen would probably work again in a few days.

I mean, unless, I guess, the Cylance DLL blacklist in the other bug ends up backported to 101.

If so, I suppose the only fix for running Firefox for a couple of months will be running nightlies?

Attachment #9279803 - Attachment is obsolete: true

Running in Windows 7 compat mode can currently trigger this.

... apologies for the bug spam... the 103 status was for the Cylance print-crash blacklist; I'm getting emails from both bugs. Also, I probably shouldn't try guessing at what your flags mean. But it would be nice to know, regardless. Will this be in 101?

(In reply to nemo from comment #58)

So this fix will not be backported to 101, based on that 103 status flag? I just need to let my coworkers know... I was telling them their Firefoxen would probably work again in a few days.

I mean, unless, I guess, the Cylance DLL blacklist in the other bug ends up backported to 101.

If so, I suppose the only fix for running Firefox for a couple of months will be running nightlies?

I think I can fairly confidently say we'll get this uplifted to Beta pretty quickly.
It might even make it to 102.0b5, which gets built tomorrow.

I wouldn't think that this on its own would drive a dot release for 101, but they might well take it if something else did.

[Tracking Requested - why for this release]:
Very simple fix removing a currently invalid release assertion affecting a fair number of users.
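
(In rough terms, and sketched in Python only because the real change is to C++ in the Firefox sandbox code: the pattern being removed is a hard assertion on an environment-dependent expectation, which aborts the process even in release builds, and the fix keeps running instead, since Win7 compatibility mode, and possibly some security software, can legitimately make the expectation false. The names and the specific condition below are invented for the sketch.)

  # Sketch of the general pattern only; names and the asserted condition are
  # invented, and the real change removes a C++ release assertion in the
  # Firefox sandbox code.

  def win32k_status_strict(observed: bool, expected: bool) -> bool:
      # Old-style behaviour: abort (even in a release build) if the observed
      # state does not match what the launcher expected.
      assert observed == expected, "win32k lockdown state mismatch"
      return observed

  def win32k_status_tolerant(observed: bool, expected: bool) -> bool:
      # Fixed behaviour: note the mismatch and carry on, because compat shims
      # or security software can make the observed state differ.
      if observed != expected:
          print("warning: win32k lockdown state mismatch; continuing")
      return observed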

Pushed by bobowencode@gmail.com:
https://hg.mozilla.org/integration/autoland/rev/6ff6d97c2ec3
p3: Remove release assertion in IsWin32kLockedDown. r=handyman
Blocks: 1772857
Keywords: leave-open

Comment on attachment 9279880 [details]
Bug 1769845 p3: Remove release assertion in IsWin32kLockedDown. r=handyman!

Note other patches in bug are already in Beta.

Beta/Release Uplift Approval Request

  • User impact if declined: Some users will continue to have a non-functioning browser.
  • Is this code covered by automated tests?: No
  • Has the fix been verified in Nightly?: No
  • Needs manual test from QE?: No
  • If yes, steps to reproduce:
  • List of other uplifts needed: None
  • Risk to taking this patch: Low
  • Why is the change risky/not risky? (and alternatives if risky): Simple removal of a currently invalid release assertion.
  • String changes made/needed: None
  • Is Android affected?: No
Attachment #9279880 - Flags: approval-mozilla-beta?

(In reply to Bob Owen (:bobowen) from comment #56)

OK, that build has finished; if you wouldn't mind another test:
https://firefox-ci-tc.services.mozilla.com/api/queue/v1/task/CS1COLXFQOupP1mdRVjGMw/runs/0/artifacts/public/build/install/sea/target.installer.exe
Zipped version:
https://firefox-ci-tc.services.mozilla.com/api/queue/v1/task/CS1COLXFQOupP1mdRVjGMw/runs/0/artifacts/public/build/target.zip

I can confirm. That nightly build works fine, with the win32k sandbox setting at its default (true).
I am also getting a lot of questions, so I would very much like to know if this is going to be backported to the Beta/Dev and Nightly builds.

Flags: needinfo?(jupiters02)
Status: ASSIGNED → RESOLVED
Closed: 2 years ago
Resolution: --- → FIXED
Target Milestone: --- → 103 Branch

(In reply to jupiters02 from comment #65)

(In reply to Bob Owen (:bobowen) from comment #56)

OK, that build has finished; if you wouldn't mind another test:
https://firefox-ci-tc.services.mozilla.com/api/queue/v1/task/CS1COLXFQOupP1mdRVjGMw/runs/0/artifacts/public/build/install/sea/target.installer.exe
Zipped version:
https://firefox-ci-tc.services.mozilla.com/api/queue/v1/task/CS1COLXFQOupP1mdRVjGMw/runs/0/artifacts/public/build/target.zip

I can confirm. That nightly build works fine, with the win32k sandbox setting at its default (true).
I am also getting a lot of questions, so I would very much like to know if this is going to be backported to the Beta/Dev and Nightly builds.

Thanks, I'll duplicate your bug over to this one.
It has already landed in Nightly and is in the latest version.
I think there is a fair chance that it will get into Fx102.0b5, which goes to build this evening at 21:00 UTC and should be available tomorrow.
If not, I think I can definitely say it will be in Fx102.0b6, which goes to build on the evening (UTC) of 9 June 2022.

Comment on attachment 9279880 [details]
Bug 1769845 p3: Remove release assertion in IsWin32kLockedDown. r=handyman!

Approved for 102 beta 5, thanks!

Attachment #9279880 - Flags: approval-mozilla-beta? → approval-mozilla-beta+

Feel free to nominate this for release approval also. And FWIW, I'm expecting to ship a 101.0.1 bugfix release this week.

Flags: needinfo?(bobowencode)

Comment on attachment 9279880 [details]
Bug 1769845 p3: Remove release assertion in IsWin32kLockedDown. r=handyman!

Beta/Release Uplift Approval Request

  • User impact if declined: Some users will continue to have a non-functioning browser.
  • Is this code covered by automated tests?: No
  • Has the fix been verified in Nightly?: Yes
  • Needs manual test from QE?: No
  • If yes, steps to reproduce:
  • List of other uplifts needed: None
  • Risk to taking this patch: Low
  • Why is the change risky/not risky? (and alternatives if risky): Simple removal of a currently invalid release assertion.
    (Note: only patch p3 is required. Patches p1 and p2 are not required to fix the issue and are more complicated, so they would increase the uplift risk.)
  • String changes made/needed: None
  • Is Android affected?: No
Flags: needinfo?(bobowencode)
Attachment #9279880 - Flags: approval-mozilla-release?
QA Whiteboard: [qa-triaged]
Flags: qe-verify+

Comment on attachment 9279880 [details]
Bug 1769845 p3: Remove release assertion in IsWin32kLockedDown. r=handyman!

Approved for 101.0.1.

Attachment #9279880 - Flags: approval-mozilla-release? → approval-mozilla-release+

Added to the 101.0.1 relnotes:

Fixed a compatibility issue causing severely impaired functionality with win32k lockdown enabled on some Windows systems

We were able to reproduce this bug on Win 10 x64 only by running an affected Nightly build (20220510095538) in compatibility mode for Win 7. It seems that on our end, the bug did not reproduce while testing on physical machines or on Win 10 x64 installed in a VM.

The issue no longer reproduces in Win 7 compatibility mode using the latest fixed builds: Nightly 103.0a1, Beta 102.0b5 and Dot Release 101.0.1.

Hi, nemo! Could you please help us verify whether this bug is fixed on your side as well, given that we did not hit the bug under the same circumstances?

Flags: needinfo?(bugs)

Verified that the latest Nightly is back to working again. I'm guessing I'll have to wait until the release is official to check 101.0.1, and I don't have Beta installed right now.
So hopefully that's good enough.

Flags: needinfo?(bugs)

The 101.0.1 release is live now (updates are throttled, though, so you may need to manually check for updates to get it). We'd love feedback :)

Flags: needinfo?(bugs)

jupiters02: Hi, so this did make it into Firefox Beta 102.0b5 and indeed it has also made it into 101.0.1 for release.
(You might still need to manually check for updates.)
It would be great to get confirmation that this has resolved your problems.

Flags: needinfo?(jupiters02)

Oh. Sorry... 101.0.1 works fine.

Flags: needinfo?(bugs)

I'm in the process of un-freezing updates for a subset of users, but based on my (limited) testing so far everything works fine.

Nightly 103.0a1 build 20220610213450
Developer 102.0b6 build 20220609185805
Beta 102.0b6 build 20220609185805
Stable 101.0.1 build 20220608170832

Tested all the above on a number of systems and I can confirm that we are not facing this issue.
I will be sure to check back with you in case anything breaks.
Thank you, everyone, for your support.

Flags: needinfo?(jupiters02)

It's pretty clear comment #81 wasn't really related to this bug, but if it helps the upset user at all: it could be a graphics acceleration problem if it only impacts RDP in that specific situation? It kinda reminds me of my Firefox woes with ssh -YC; maybe try disabling WebRender and GL acceleration in general.

Thanks for your feedback! We can close this as verified fixed per the above comments.

Status: RESOLVED → VERIFIED
Flags: qe-verify+
Flags: in-qa-testsuite+
Duplicate of this bug: 1772470
