Re: [Rpcemu] Patch to improve IOMD timer behaviour

Hi Matthew,

On Tue, 24 Jan 2023, Matthew Howkins wrote:

> Apologies for the slow response.
>
> The code looks good, but I'd like to do some further testing.

No problem.

> Do you have suggestions for programs which would benefit from these more accurate timers?

Nothing specific - I guess one of the reasons the current implementation
has survived as long as it has is that not much code is affected by the
accuracy of the timers. But I can see that the timers are used by:

* The Internet module for generating TCP sequence numbers (so might cause
problems in high-traffic situations?)
* ADFS for implementing many short delays (so might cause clock drift
issues during heavy disc IO?)
* And in a few other areas in the OS for generating high-resolution
timestamps (but probably not frequently enough for the current
implementation to cause serious problems) - the sketch after this list
shows roughly what those reads involve
* The gmon profiler in GCC's UnixLib
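
Of those, the timestamp uses are probably the most sensitive to counter
accuracy, since they read the timer mid-period rather than just counting
100Hz interrupts. A rough sketch of what such a read looks like (the
register offsets follow the ARM7500FE/IOMD documentation; the function
names and access style are mine, purely for illustration):

    /* Sketch: reading IOMD Timer 0, a 16-bit down-counter clocked at
       2MHz. Offsets per the ARM7500FE docs; names illustrative only. */
    #include <stdint.h>

    #define IOMD_BASE    0x03200000u
    #define IOMD_T0LOW   0x40u   /* latched count, low byte */
    #define IOMD_T0HIGH  0x44u   /* latched count, high byte */
    #define IOMD_T0LATCH 0x4Cu   /* write-only: latch the current count */

    static inline uint8_t iomd_read(uint32_t off)
    {
        return *(volatile uint8_t *)(IOMD_BASE + off);
    }

    static inline void iomd_write(uint32_t off, uint8_t val)
    {
        *(volatile uint8_t *)(IOMD_BASE + off) = val;
    }

    /* Latch and read the current count. The counter runs downwards from
       its reload value, so time-since-last-tick = reload - count. */
    uint16_t iomd_timer0_count(void)
    {
        iomd_write(IOMD_T0LATCH, 0);
        return (uint16_t)(iomd_read(IOMD_T0LOW) |
                          (iomd_read(IOMD_T0HIGH) << 8));
    }

Combining that mid-period count with the centisecond tick is what gives
the OS its high-resolution timestamps, and it's exactly that combination
which exposes any mismatch between the emulated counter and the emulated
tick rate.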

> Also, I'm aware that there's an intention to introduce an API for high-resolution timers to RISC OS,
> but I haven't been following this closely enough. Can you provide any update on the current state of
> this?

The first stage is nearing completion; I've got drivers ready for all of
the platforms ROOL offer ROM downloads for, and the kernel changes are
mostly complete. The main things holding the changes back from being
merged are the pending RISC OS 5.30 release (I think the changes are too
risky to go into that), and finalising the SWI names & details.

This draft MR has more info:
https://gitlab.riscosopen.org/RiscOS/Sources/Kernel/-/merge_requests/63
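
For a flavour of what calling it might look like from C (bearing in mind
the names & details aren't final - the SWI number and register
assignments below are pure guesswork on my part, for illustration only):

    #include <stdint.h>
    #include "kernel.h"
    #include "swis.h"

    /* Made-up SWI number - nothing has actually been allocated yet */
    #define OS_ReadMonotonicTime64 0x58100

    uint64_t read_monotonic64(void)
    {
        unsigned int lo, hi;
        /* Assumed convention: 64-bit time returned in R0 (low), R1 (high) */
        _swix(OS_ReadMonotonicTime64, _OUTR(0, 1), &lo, &hi);
        return ((uint64_t)hi << 32) | lo;
    }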

There's some IOMD-specific test code in
https://gitlab.riscosopen.org/jlee/HAL_IOMD/-/tree/NanoTime/test/timers.
The timeloss test is the most important one - it checks that my crazy plan
for accurately scheduling interrupts for arbitrary times will actually
work.
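
The plan itself fits in a few lines (this is a paraphrase of the idea
rather than the actual HAL code - every name here is made up):

    #include <stdint.h>

    #define TIMER_MAX 0xFFFFu  /* IOMD timers are 16-bit down-counters */

    extern uint64_t read_monotonic_ticks(void);    /* current time, in ticks */
    extern void     program_timer(uint16_t ticks); /* reload; IRQ at zero */
    extern void     fire_event(void);

    static uint64_t next_event;  /* absolute tick at which the event is due */

    /* Timer IRQ handler: fire the event if due, otherwise re-arm for the
       remaining interval, clamped to 16 bits. The ticks that elapse
       between reading the clock and reprogramming the counter are lost
       each time - that loss is the "delta-per-event" figure that the
       timeloss test measures. */
    void timer_irq(void)
    {
        uint64_t now = read_monotonic_ticks();
        if (now >= next_event) {
            fire_event();
            return;
        }
        uint64_t remaining = next_event - now;
        program_timer(remaining > TIMER_MAX ? TIMER_MAX
                                            : (uint16_t)remaining);
    }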

* On real hardware it looks like it should work well enough. The time will
drift a bit, but it should be safely within the limits of the OS's current
clock correction system. The comments at the start of the file have some
example figures from real systems.
* On current versions of RPCEmu, the timeloss test appears to fail
completely - it just sits there forever without completing (I'm sure that
an earlier version of the code did complete, producing nonsense results,
but I'm not sure if I have a copy of that version any more).
* With my RPCEmu patches applied, the code runs and produces OK-ish
results. The "delta-per-event" is a bit on the large side (I've seen
around -0.37 ticks), but that's not much worse than a StrongARM with the
cache off, so it's probably fine. However the "Max time in critical
section" value can be a fair bit larger than on real machines (I've seen
it report up to 155 ticks), which may cause some problems, so I'll have to
do some more investigation around that. I'm not expecting this to be
something that could be fixed in RPCEmu (RPCEmu has no control over the
host OS's thread scheduling), so worst-case scenario might be that the
IOMD version of RISC OS 5 only offers a subset of the functionality that
the other platforms will offer (e.g. OS_ReadMonotonicTime64 available, but
the nanosecond-resolution version of OS_CallAt/OS_CallEvery unavailable).
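
To put those figures in perspective (my own back-of-envelope arithmetic,
assuming the timers' 2MHz clock):

    0.37 ticks / 2MHz ~= 185ns lost per reprogrammed event
    155 ticks / 2MHz  =  77.5us spent with the timer unserviceable

So the per-event loss only amounts to around 18.5us of drift per second
even at 100 events per second, which is presumably why it looks
tolerable; it's the 77.5us critical-section stalls that risk an event
being serviced noticeably late.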

Cheers,

- Jeffrey


_______________________________________________
RPCEmu mailing list
RPCEmu@riscos.info
http://www.riscos.info/cgi-bin/mailman/listinfo/rpcemu
