These classes are non-trivial and are definitely going to be changed in
the future, so we default these special member functions out of line to
prevent issues with forward declarations and to keep the compiler from
inlining tear-down code.
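As a minimal sketch of the pattern (the class and member names below are
illustrative, not the actual classes), defaulting the special members in
the cpp file keeps the forward-declared member type out of the header and
the tear-down code out of every translation unit:

    // foo.h -- users of Foo only ever see a forward declaration of Impl.
    #include <memory>

    class Impl;

    class Foo {
    public:
        Foo();
        ~Foo();  // declared here, but not defined in the header

    private:
        std::unique_ptr<Impl> impl;
    };

    // foo.cpp -- Impl is a complete type here, so the defaulted special
    // members (and unique_ptr's deleter) are instantiated only in this file.
    class Impl {};

    Foo::Foo() : impl{std::make_unique<Impl>()} {}
    Foo::~Foo() = default;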
The constructor alone is pretty large, so the reading code should be
split into its constituent parts to make it easier to understand without
having to build a mental model of a 300+ line function.
The only reason the getter existed was to check whether or not the
program NCA was null. Instead of exposing the NCA entirely, we can just
provide a function that queries for its existence.
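A hedged sketch of the idea (the member and function names are assumptions
on my part):

    // Rather than exposing the NCA just so callers can null-check it...
    // std::shared_ptr<NCA> GetProgramNCA() const;
    // ...expose only the query the callers actually need:
    bool HasProgramNCA() const {
        return program_nca != nullptr;
    }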
The data retrieved in these cases is ultimately owned by either the
RegisteredCache instance itself or the filesystem factories. Both of
these should live throughout the use of their contained data; if they
don't, it should be considered an interface/design issue, and using
shared_ptr instances here would mask that, as the data would always be
prolonged past the main owner's lifetime.
This makes the lifetime of the data explicit and makes it harder to
accidentally create cyclic references. It also makes the interface
slightly more flexible than the previous API, as a shared_ptr can be
created from a unique_ptr, but not the other way around, so this allows
for that use case if it ever becomes necessary.
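For example, the conversion only works in one direction:

    #include <memory>

    std::unique_ptr<int> unique = std::make_unique<int>(42);

    // A shared_ptr can take ownership from a unique_ptr...
    std::shared_ptr<int> shared = std::move(unique);

    // ...but there is no conversion back, so returning unique_ptr from the
    // interface keeps both options open for callers.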
Control Code 0xf means to unconditionally execute the instruction. This
value is passed to most BRA, EXIT and SYNC instructions (among others)
but this may not always be the case.
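A rough sketch of how a decoder might treat this; the enum and function
names here are assumptions rather than the actual decoder API:

    #include <cstdint>

    enum class ConditionCode : std::uint64_t {
        // ... predicated condition codes ...
        T = 0xF,  // always true: the instruction executes unconditionally
    };

    // For BRA/EXIT/SYNC, anything other than 0xF means the branch is
    // predicated and can't simply be treated as an unconditional jump.
    bool IsUnconditional(ConditionCode cc) {
        return cc == ConditionCode::T;
    }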
There's no need for shared ownership here, as the System class itself is
the sole owner of the Cpu instances. We can also make the thread_to_cpu
map use regular pointers instead of shared_ptrs, given that the Cpu
instances will always outlive the cases where they're used with that map.
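A hedged sketch of the resulting ownership layout (member names are
illustrative):

    #include <map>
    #include <memory>
    #include <thread>
    #include <vector>

    class Cpu {};

    class System {
        // System is the sole owner of the Cpu instances...
        std::vector<std::unique_ptr<Cpu>> cpu_cores;

        // ...so this lookup map can hold non-owning raw pointers; the cores
        // are guaranteed to outlive any lookup made through it.
        std::map<std::thread::id, Cpu*> thread_to_cpu;
    };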
Like the barrier, this is owned entirely by the System and will always
outlive the encompassing state, so shared ownership semantics aren't
necessary here.
This will always outlive the Cpu instances, since it's destroyed after
we destroy the Cpu instances on shutdown, so there's no need for shared
ownership semantics here.
This function doesn't need to care about ownership semantics, so we can
just pass it a reference to the file itself, rather than a
std::shared_ptr alias.
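A sketch of the signature change, with invented function names for
illustration:

    #include <memory>

    struct VfsFile {};  // stand-in for the real file type

    // Before: the shared_ptr parameter implies ownership the callee never takes.
    void ParseBefore(std::shared_ptr<VfsFile> file);

    // After: the callee only reads from the file, so a reference is enough
    // (and passing nullptr becomes impossible).
    void ParseAfter(const VfsFile& file);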
So, one thing that's puzzled me is why the kernel seemed to *not* use
the direct code address ranges in some cases for some service functions.
For example, in svcMapMemory, the full address space width is compared
against for validity, but for svcMapSharedMemory, it compares against
0xFFE00000, 0xFF8000000, and 0x7FF8000000 as upper bounds, and uses
either 0x200000 or 0x8000000 as the lower bound of the compared range.
Coincidentally, these exact same values are also used in svcGetInfo and
when initializing the user address space, so what's actually being
retrieved here is the ASLR extents, not the extents of the address space
in general.
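To make that concrete, here is an illustrative set of constants built
from the bounds quoted above; the names, and the pairing of each lower
bound with the 32-bit, 36-bit, and 39-bit address space widths, are
assumptions on my part:

    #include <cstdint>

    struct AslrRegion {
        std::uint64_t begin;
        std::uint64_t end;
    };

    // Presumed ASLR extents for each address space configuration.
    constexpr AslrRegion aslr_region_32{0x200000, 0xFFE00000};
    constexpr AslrRegion aslr_region_36{0x8000000, 0xFF8000000};
    constexpr AslrRegion aslr_region_39{0x8000000, 0x7FF8000000};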
This should make crashes easier to diagnose and prevent many users from
thinking that a game is still running when, in fact, it's just an audio
thread that's still running (this thread is typically not killed when
svcBreak is hit, since the game expects us to do that).
A fairly basic service function, which currently only appears to support
retrieving the process state. This also alters the ProcessStatus enum to
contain all of the values that a kernel process seems to be capable of
reporting with regard to its state.
Neither of these functions alters the ownership of the provided pointer,
so we can simply make the parameters references rather than direct
shared pointer aliases. This way, we also disallow passing invalid
values like nullptr.
We can utilize QStringList's join() function to perform all of the
appending in a single function call.
While we're at it, make the extension list a single translatable string
and add a disambiguation comment to explain to translators what %1
actually is.
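A hedged Qt sketch of the idea; the extension list and dialog wording
here are illustrative rather than the exact strings used:

    #include <QObject>
    #include <QString>
    #include <QStringList>

    const QStringList extensions{QStringLiteral("*.nso"), QStringLiteral("*.nro"),
                                 QStringLiteral("*.nca"), QStringLiteral("*.xci")};

    // join() performs all of the appending in one call, and the disambiguation
    // comment tells translators what %1 stands for.
    const QString filter =
        QObject::tr("Switch Executable (%1);;All Files (*.*)",
                    "%1 is a placeholder for the list of file extensions.")
            .arg(extensions.join(QStringLiteral(" ")));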
Depending on whether or not USE_DISCORD_PRESENCE is defined, the "state"
parameter can be used or unused. If USE_DISCORD_PRESENCE is not defined,
the parameter will be considered unused, which can lead to compiler
warnings. So, we can explicitly mark it with [[maybe_unused]] to inform
the compiler that this is intentional.
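For example (the helper declared inside the #ifdef is made up for
illustration):

    #include <string>

    #ifdef USE_DISCORD_PRESENCE
    void UpdatePresence(const std::string& state);  // hypothetical helper
    #endif

    void OnStateChanged([[maybe_unused]] const std::string& state) {
    #ifdef USE_DISCORD_PRESENCE
        UpdatePresence(state);
    #endif
        // Without [[maybe_unused]], builds that lack USE_DISCORD_PRESENCE
        // would warn that 'state' is never read.
    }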
We can just reserve the memory and then perform successive insertions
instead of needing to use std::memcpy. This also avoids the need to zero
out the output vector's memory before performing the insertions.
We can also std::move the output std::vector into the destination so
that we don't need to make a completely new copy of the vector, getting
rid of an unnecessary allocation.
Additionally, we can use iterators to determine the beginning and end
ranges of the std::vector instances that comprise the output vector, as
the end of one range just becomes the beginning of the next successive
range. Since std::vector's iterator constructor copies data within the
range [begin, end), this is more straightforward and gets rid of the
need for an offset variable that keeps getting incremented to determine
where to do the next std::memcpy.
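A hedged sketch of both patterns; the element type, split point, and
function names are illustrative:

    #include <cstdint>
    #include <utility>
    #include <vector>

    std::vector<std::uint8_t> Combine(const std::vector<std::uint8_t>& a,
                                      const std::vector<std::uint8_t>& b) {
        std::vector<std::uint8_t> output;
        output.reserve(a.size() + b.size());
        // Range insertions replace std::memcpy: no zero-fill of 'output' and
        // no offset bookkeeping are needed.
        output.insert(output.end(), a.begin(), a.end());
        output.insert(output.end(), b.begin(), b.end());
        return output;  // callers can std::move this into its destination
    }

    std::pair<std::vector<std::uint8_t>, std::vector<std::uint8_t>>
    Split(const std::vector<std::uint8_t>& combined) {
        // The end of the first range is the beginning of the next, and
        // vector's iterator constructor copies [begin, end), so no running
        // offset variable is required. Assumes at least 0x10 bytes of input.
        const auto boundary = combined.begin() + 0x10;  // illustrative split point
        return {std::vector<std::uint8_t>(combined.begin(), boundary),
                std::vector<std::uint8_t>(boundary, combined.end())};
    }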
Given it's only used in one spot and has a fairly generic name, we can
just specify it directly in the function call. This also has the benefit
of automatically moving it.
Instead, we can make these part of the type and create named variables
for them, so they only require one definition (and if they ever change
for whatever reason, they only need to be changed in one spot).
Given the VirtualFile instance isn't stored in the class as a data
member or written to, it can just be turned into a const reference, as
the constructor doesn't need to make a copy of it.
If the data is unconditionally being appended to the back of a
std::vector, we can just directly insert it there without the need to
insert all of the elements one-by-one with a std::back_inserter.
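A small sketch of the difference (names are illustrative):

    #include <cstdint>
    #include <vector>

    void Append(std::vector<std::uint8_t>& out,
                const std::vector<std::uint8_t>& data) {
        // Instead of:
        //   std::copy(data.begin(), data.end(), std::back_inserter(out));
        // a single range insert appends the whole block at once:
        out.insert(out.end(), data.begin(), data.end());
    }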