Since C++17, we no longer need to explicitly specify the type of the
mutex within the lock_guard; the compiler can now deduce it through
class template argument deduction, aided by the provided deduction
guides.
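For example (a minimal illustration):

    #include <mutex>

    std::mutex mtx;

    void Critical() {
        // Pre-C++17: std::lock_guard<std::mutex> lock{mtx};
        std::lock_guard lock{mtx}; // deduced as std::lock_guard<std::mutex>
    }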
Based on reverse engineering (RE), most of these structure members are
register values, which makes sense given this service is used to convey
fatal errors.
One member indicates the program entry point address, one is a set of
bit flags used to determine which registers to print, and one member
indicates the architecture type.
The only member that still isn't determined is the final member within
the data structure.
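As a rough sketch of what that layout might look like (the member names
here are purely illustrative guesses, and the final member's purpose is
still unknown):

    #include <cstdint>

    // Illustrative layout only, based on the RE notes above.
    struct FatalContext {
        std::uint64_t registers[32];  // register values to report
        std::uint64_t entry_point;    // program entry point address
        std::uint64_t register_flags; // bit flags: which registers to print
        std::uint32_t architecture;   // architecture type
        std::uint32_t unknown;        // final member, still undetermined
    };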
The kernel makes sure that the given size to unmap is always the same
size as the entire region managed by the shared memory instance;
otherwise, it returns an error code signifying an invalid size.
This is similarly done for transfer memory (which we already check for).
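A minimal sketch of that check (names follow yuzu conventions but are
illustrative):

    #include <cstdint>

    using u64 = std::uint64_t;

    struct SharedMemory {
        u64 size; // size of the entire managed region

        bool IsValidUnmapSize(u64 unmap_size) const {
            // The kernel rejects any unmap request that doesn't cover
            // the whole region.
            return unmap_size == size;
        }
    };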
Many of these functions are carried over from Dolphin (where they aren't
used anymore). Given they have no use (and we really shouldn't be
screwing around with OS-specific thread scheduler handling from the
emulator), they can be removed.
The function for setting the thread name is kept, however, since it's
useful for debugging.
This was initially added to prevent problems from stubbed/unimplemented
NFC services. Since we never encountered any such problems, and the
setting is only used in a deprecated function anyway, we can remove it
to reduce clutter in the settings.
svcGetInfo now reports the (mostly) correct size for queries of total
used physical memory. This still doesn't correctly handle memory
allocated via svcMapPhysicalMemory; however, we don't currently handle
that case anyway.
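In sketch form, the accounting amounts to something like this (the
member breakdown is illustrative; the real bookkeeping is more
involved):

    #include <cstdint>

    using u64 = std::uint64_t;

    // Illustrative stand-in for the process bookkeeping.
    struct Process {
        u64 image_size = 0;
        u64 heap_used = 0;

        // Memory mapped via svcMapPhysicalMemory is not yet included.
        u64 GetTotalPhysicalMemoryUsed() const {
            return image_size + heap_used;
        }
    };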
This will make operating with the process-related SVC commands much
nicer in the future (the parameter representing the stack size in
svcStartProcess is a 64-bit value).
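For reference, a hedged sketch of the call's shape (parameter names are
illustrative):

    #include <cstdint>

    using u32 = std::uint32_t;
    using u64 = std::uint64_t;
    using Handle = u32;
    using ResultCode = u32;

    // Illustrative prototype: the main thread stack size is a 64-bit
    // value, so the surrounding process interfaces want to be u64-based.
    ResultCode StartProcess(Handle process, u32 main_thread_priority,
                            u32 default_processor_id,
                            u64 main_thread_stack_size);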
This isn't used at all in the OpenGL shader cache, so we can remove its
include here, meaning one less file needs to be recompiled if any
changes ever occur within that header.
core/memory.h is also not used within this file at all, so we can remove
it as well.
We can just pass in the Maxwell3D instance instead of going through the
system class to get at it.
This also lets us simplify the interface a little bit. Since we pass in
the Maxwell3D context now, we only really need to pass the shader stage
index value in.
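Roughly, the change looks like this (the signatures here are
illustrative, not the exact ones):

    #include <cstddef>

    namespace Core { class System; }
    namespace Tegra::Engines { class Maxwell3D; }

    // Before (illustrative): the callee had to reach through the system
    // instance to find the Maxwell3D context.
    // void SetupShaderStage(Core::System& system, std::size_t stage_index);

    // After (illustrative): the caller hands the context over directly,
    // so only the shader stage index needs to accompany it.
    void SetupShaderStage(Tegra::Engines::Maxwell3D& maxwell3d,
                          std::size_t stage_index);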
The pusher instance is only ever used in the constructor of the
ThreadManager for creating the thread that the ThreadManager instance
contains. Aside from that, the member is unused, so it can be removed.
These functions act in tandem, similar to how a lock or mutex requires a
balanced lock()/unlock() sequence.
EnterFatalSection simply increments a counter for how many times it has
been called, while LeaveFatalSection ensures that a previous call to
EnterFatalSection has occurred. If a previous call has occurred (the
counter is not zero), then the counter gets decremented as one would
expect. If a previous call has not occurred (the counter is zero), then
an error code is returned.
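A minimal sketch of the pair (the error code name is illustrative):

    #include <cstdint>

    using ResultCode = std::uint32_t;
    constexpr ResultCode RESULT_SUCCESS = 0;
    constexpr ResultCode ERR_UNBALANCED = 1; // illustrative value

    struct FatalSection {
        std::uint64_t counter = 0;

        void Enter() {
            ++counter; // just records another nested entry
        }

        ResultCode Leave() {
            if (counter == 0) {
                return ERR_UNBALANCED; // no matching Enter() occurred
            }
            --counter;
            return RESULT_SUCCESS;
        }
    };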
In some cases, our callbacks were using s64 as a parameter, and in other
cases, they were using an int, which is inconsistent.
To make all callbacks consistent, we can just use s64 as the type for
late cycles, which also gets rid of the need to cast internally.
While we're at it, also resolve some signed/unsigned conversions that
were occurring related to the callback registration.
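The unified callback shape then looks something like this (the type
alias is illustrative):

    #include <cstdint>
    #include <functional>

    using u64 = std::uint64_t;
    using s64 = std::int64_t;

    // Every timing callback now receives the late-cycle count as an
    // s64, so no internal casting is needed at the call sites.
    using TimedCallback = std::function<void(u64 userdata, s64 cycles_late)>;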
One behavior that we weren't handling properly in our heap allocation
process was the ability for the heap to be shrunk if a larger size was
previously requested.
This adds the basic behavior to do so and also gets rid of HeapFree, as
it's no longer necessary now that we have allocations and deallocations
going through the same API function.
While we're at it, fully document the behavior that this function
performs.
Makes it more obvious that this function is intended to stand in for
the actual supervisor call itself, rather than acting as a general heap
allocation function.
Also, the following change will merge the freeing behavior of HeapFree
into this function, so leaving the name as HeapAllocate would be
misleading.
In cases where HeapAllocate is called with the same size as the current
heap, we can simply do nothing and return successfully.
This avoids doing work where we otherwise don't have to. This is also
what the kernel itself does in this scenario.
Another holdover from Citra that can be tossed out is the notion of the
heap needing to be allocated at different addresses. On the Switch, the
base address of the heap will always be managed by the memory allocator
in the kernel, so this doesn't need to be specified in the function's
interface itself.
The heap on the Switch is always allocated with read/write permissions,
so we don't need to make the memory permissions part of the heap
allocation interface either.
This also corrects the error code returned from within the function.
If the size of the heap is larger than the entire heap region, then the
kernel will report an out of memory condition.
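Putting the pieces above together, a simplified sketch of the resulting
behavior (names and structure are illustrative):

    #include <cstdint>

    using u64 = std::uint64_t;
    using VAddr = u64;

    struct HeapManager {
        VAddr heap_base = 0;      // always chosen by the kernel's allocator
        u64 heap_size = 0;        // current heap size
        u64 heap_region_size = 0; // size of the entire heap region

        // Returns true on success; mirrors the rules described above.
        bool SetHeapSize(u64 size) {
            // Larger than the entire heap region: out of memory.
            if (size > heap_region_size) {
                return false;
            }
            // Same size as the current heap: nothing to do.
            if (size == heap_size) {
                return true;
            }
            // Otherwise grow or shrink the mapping. The heap is always
            // mapped read/write, so no permission parameter is needed.
            heap_size = size;
            return true;
        }
    };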
The use of a shared_ptr is an implementation detail of the VMManager
itself when mapping memory. Because of that, we shouldn't require all
users of the CodeSet to have to allocate the shared_ptr ahead of time.
It's intended that CodeSet simply pass in the required data directly,
and that the memory manager take care of it from that point on.
This means we just do the shared pointer allocation in a single place,
when loading modules, as opposed to in each loader.
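In sketch form (types heavily simplified for illustration):

    #include <cstdint>
    #include <memory>
    #include <utility>
    #include <vector>

    // Illustrative stand-ins for the real types.
    struct CodeSet {
        std::vector<std::uint8_t> memory; // plain data, no shared_ptr
    };

    struct Process {
        void LoadModule(CodeSet code_set) {
            // The single shared_ptr allocation now happens here, when
            // the module is loaded, instead of in every loader.
            auto backing = std::make_shared<std::vector<std::uint8_t>>(
                std::move(code_set.memory));
            // ... hand `backing` off to the memory manager ...
            static_cast<void>(backing); // not used further in this sketch
        }
    };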
This source file was utilizing its own version of the NSO header.
Instead of keeping this around, we can have the patch manager also use
the version of the header that we have defined in loader/nso.h.
The total struct itself is 0x100 (256) bytes in size, so we should be
providing that amount of data.
Without it, data can end up missing from the final loaded NSO file.
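A compile-time size check keeps this honest (the struct body here is an
illustrative stand-in for the real definition in loader/nso.h):

    #include <array>
    #include <cstdint>

    // Illustrative stand-in for the real header definition.
    struct NSOHeader {
        std::array<std::uint8_t, 0x100> raw{};
    };

    // Guards against reading fewer than the full 0x100 (256) bytes.
    static_assert(sizeof(NSOHeader) == 0x100, "NSOHeader has incorrect size.");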
Implements an API agnostic texture view based texture cache. Classes
defined here are intended to be inherited by the API implementation and
used in API-specific code.
This implementation exposes protected virtual functions to be called
from the implementer.
Before executing any surface copy methods (defined in API-specific
code), it tries to detect whether the overlapping surface is a superset
and, if it is, creates a view. Views are references to a subset of a
surface; a view can also span the entire surface (the same as
referencing the whole texture).
Current code manages 1D, 1D array, 2D, 2D array, cube map, and cube map
array textures with layer and mipmap level views. 3D texture slice views
are not implemented.
If the view attempt fails, the fast path is invoked with the
overlapping textures (defined in the implementer). If that also fails
(returning nullptr), the texture is flushed and reloaded.
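The decision sequence, sketched as a base class (all names are
illustrative):

    // Illustrative stand-ins for the real cache types.
    struct SurfaceParams {};
    struct Surface {
        bool IsSupersetOf(const SurfaceParams&) const { return false; }
        Surface* CreateView(const SurfaceParams&) { return this; }
    };

    class TextureCacheBase {
    public:
        virtual ~TextureCacheBase() = default;

        // Sketch of the lookup order described above.
        Surface* GetSurface(Surface& overlap, const SurfaceParams& params) {
            if (overlap.IsSupersetOf(params)) {
                // Superset found: reference a subset of the surface
                // (or the whole texture).
                return overlap.CreateView(params);
            }
            if (Surface* fast = TryFastPath(overlap, params)) {
                return fast; // API-specific fast copy path succeeded
            }
            // Fast path failed: flush the texture and reload it.
            return FlushAndReload(overlap, params);
        }

    protected:
        // Provided by the API-specific implementation.
        virtual Surface* TryFastPath(Surface& overlap,
                                     const SurfaceParams& params) = 0;
        virtual Surface* FlushAndReload(Surface& overlap,
                                        const SurfaceParams& params) = 0;
    };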
Makes it more evident that one is for actual code and one is for actual
data. Mutable and static are less than ideal terms here, because
read-only data is technically not mutable, but we were mapping it with
that label.
In 93da8e0abf, the page table construct
was moved to the common library (which utilized these inclusions). Since
the move, nothing requires these headers to be included within the
memory header.
- GPU will be released on shutdown, before pages are unmapped.
- On subsequent runs, current_page_table will not be nullptr, but the GPU might not be valid yet.
When #2247 was created, thread_queue_list.h was the only user of
boost-related code, however #2252 moved the page table struct into
common, which makes use of Boost.ICL, so we need to add the dependency
to the common library's link interface again.
Given this is utilized by the loaders, this allows avoiding inclusion
of the kernel process definitions where they aren't needed.
This also keeps the loading format for all executable data separate from
the kernel objects.
Neither the NRO nor the NSO loader actually makes use of the functions
or members provided by the Linker interface, so we can just remove the
inheritance altogether.
This function passes in the desired main applet and library applet
volume levels. We can then just pass those values back within the
relevant volume getter functions, allowing us to unstub those as well.
The initial values for the library and main applet volumes differ. The
main applet volume is 0.25 by default, while the library applet volume
is initialized to 1.0 by default in the services themselves.
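In essence (class and member names are illustrative):

    // Illustrative sketch of the now-unstubbed volume plumbing.
    class AppletVolumes {
    public:
        // The setter passes both desired volume levels in...
        void Set(float main, float library) {
            main_applet_volume = main;
            library_applet_volume = library;
        }

        // ...and the getters simply hand those values back.
        float GetMainAppletVolume() const { return main_applet_volume; }
        float GetLibraryAppletVolume() const { return library_applet_volume; }

    private:
        float main_applet_volume = 0.25f;   // service default
        float library_applet_volume = 1.0f; // service default
    };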
Modifying CMAKE_* related flags directly applies those changes to every
single CMake target. This includes even the targets we have in the
externals directory.
So, if we ever increased our warning levels, or enabled particular ones,
or enabled any other compilation setting, then this would apply to
externals as well, which is often not desirable.
This makes our compilation flag setup less error prone by only applying
our settings to our targets and leaving the externals alone entirely.
This also means we don't end up clobbering any provided flags on the
command line either, allowing users to specifically use the flags they
want.
We generally shouldn't be hijacking CMAKE_CXX_FLAGS, etc. as a means to
append flags to the targets, since this adds the compilation flags to
everything, including our externals, which can result in weird issues
and makes the build hierarchy fragile.
Instead, we want to apply these compilation flags to our targets only,
and let those managing external libraries properly specify their own
compilation flags.
This also results in us not getting as many warnings, as we don't raise
the warning level on every external target.
We really don't need to pull in several headers of Boost-related
machinery just to perform the erase-remove idiom (particularly with
C++20 around the corner, which adds the universal container functions
std::erase and std::erase_if that we can simply use instead).
With this, we don't need to link in anything boost-related into common.
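For comparison:

    #include <algorithm>
    #include <vector>

    void RemoveTwos(std::vector<int>& v) {
        // The plain standard-library erase-remove idiom; no Boost needed.
        v.erase(std::remove(v.begin(), v.end(), 2), v.end());

        // C++20 reduces this to a single call:
        // std::erase(v, 2);
    }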
Rather than make a global accessor for this sort of thing, we can make
it a part of the thread interface itself. This allows getting rid of a
hidden global accessor in the kernel code.
This condition was checking against the nominal thread priority, whereas
the kernel itself checks against the current priority instead. We were
also assigning the nominal priority, when we should be assigning
current_priority, which takes priority inheritance into account.
This can lead to the incorrect priority being assigned to a thread.
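The shape of the fix, sketched (names are illustrative; on the Switch,
a lower value means a higher priority):

    // Illustrative sketch: compare and assign the current priority,
    // which accounts for priority inheritance, not the nominal one.
    struct Thread {
        int nominal_priority = 0;
        int current_priority = 0;
    };

    void InheritPriority(Thread& owner, const Thread& waiter) {
        // Before: waiter.nominal_priority was used on both lines.
        if (waiter.current_priority < owner.current_priority) {
            owner.current_priority = waiter.current_priority;
        }
    }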
Given we recursively update the relevant threads, we don't need to go
through the whole mutex waiter list. This matches what the kernel does
as well (only accessing the first entry within the waiting list).
* gdbstub: fix IsMemoryBreak() returning false while connected to client
As a result, the only existing codepath for a memory watchpoint hit to break into GDB (InterpreterMainLoop, GDB_BP_CHECK, ARMul_State::RecordBreak) is finally taken,
which exposes incorrect logic* in both RecordBreak and ServeBreak.
* a blank BreakpointAddress structure is passed, which sets r15 (PC) to NULL
* gdbstub: DynCom: default-initialize two members/vars used in conditionals
* gdbstub: DynCom: don't record memory watchpoint hits via RecordBreak()
For now, instead check for GDBStub::IsMemoryBreak() in InterpreterMainLoop and ServeBreak.
Fixes PC being set to a stale/unhit breakpoint address (often zero) when a memory watchpoint (rwatch, watch, awatch) is handled in ServeBreak() and generates a GDB trap.
Reasons for removing a call to RecordBreak() for memory watchpoints:
* The ``breakpoint_data`` we pass is typed Execute or None. It describes the predicted next code breakpoint hit relative to PC;
* GDBStub::IsMemoryBreak() returns true if a recent Read/Write operation hit a watchpoint, but it doesn't specify which one in its return value, nor does it trace it anywhere. Thus, the only data we could give RecordBreak() is a placeholder BreakpointAddress at offset NULL and type Access. I found that idea silly, compared to simply relying on GDBStub::IsMemoryBreak().
There is currently no measure in the code that remembers the addresses (and types) of any watchpoints that were hit by an instruction, in order to send them to GDB as "extended stop information."
I'm considering an implementation for this.
* gdbstub: Change an ASSERT to DEBUG_ASSERT
I have never seen the (Reg[15] == last_bkpt.address) assert fail in practice, even after several weeks of (locally) developing various branches around GDB. Only leave it inside Debug builds.
Makes it an instantiable class like it is in the actual kernel. This
will also allow removing reliance on global accessors in a following
change, now that we can encapsulate a reference to the system instance
in the class.
Within the kernel, shared memory and transfer memory facilities exist as
completely different kernel objects. They also have different validity
checking as well. Therefore, we shouldn't be treating the two as the
same kind of memory.
They also differ in their behavioral aspects. Shared memory is intended
for sharing memory between processes, while transfer memory is intended
for transferring memory to other processes.
This breaks out the handling for transfer memory into its own class and
treats it as its own kernel object. This is also important when we
consider resource limits, particularly because transfer memory is
limited by the resource limit value set for it.
While we don't currently handle resource limit testing against objects
(though we do allow setting them), this will make implementing that
behavior much easier in the future, as we won't need to distinguish
between shared memory and transfer memory allocations in the same
place.
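A skeletal sketch of the separation (interface heavily simplified and
illustrative):

    #include <cstdint>

    using u64 = std::uint64_t;
    using VAddr = u64;

    // Illustrative only: transfer memory as its own kernel object,
    // no longer sharing a type with shared memory.
    class TransferMemory {
    public:
        TransferMemory(VAddr base_address, u64 size)
            : base_address{base_address}, size{size} {}

        // Size is what gets counted against the transfer memory
        // resource limit.
        u64 GetSize() const { return size; }

    private:
        VAddr base_address;
        u64 size;
    };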