-
Now that all of our code is
running fast and awesome, let's
-
talk a bit more about memory and how it
affects the performance in our system.
-
Many programming languages that are
known for being close to the hardware,
-
or rather, high performance,
like C, C++,
-
and Fortran, usually require programmers
to manage memory themselves.
-
Effectively programmers
are responsible for
-
allocating a block of memory and then
sometime in the future de-allocating
-
it when they're actually done using it.
-
Since you define when and how much
memory to allocate and free, the
-
quality of your memory management depends
entirely on your skill and effectiveness.
-
That's a lot of responsibility.
-
And in reality, programmers aren't
always the best at keeping track of
-
all those bits and pieces of memory.
-
I mean think about it,
product development is a muddy and
-
crazy process and often memory ends
up not getting freed properly.
-
These unfreed blocks of memory are
called memory leaks, and they just sit
-
around hogging resources that you
could put to better use elsewhere.
-
To reduce this chaos, stress,
and sometimes big money losses,
-
caused by memory leaks,
managed memory languages were created.
-
The run times of these languages
track memory allocations and
-
release memory back to the system
when it's no longer needed by
-
the application itself, all without
any intervention from the programmer.
-
This art, or rather science,
of reclaiming memory in a managed memory
-
environment is known as garbage
collection. The concept was created by
-
John McCarthy in 1959 to solve problems
in the Lisp programming language.
-
The basic principles of garbage
collection are as follows. Number one:
-
find data objects in a program that
cannot be accessed in the future, for
-
example, any memory that is no
longer referenced by the code.
-
And number two: reclaim
the resources used by those objects.
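The first principle can be sketched as a tiny mark phase: starting from a set of "roots" (references the program can still reach), mark everything reachable, and whatever is left unmarked is garbage. This is an illustrative toy only, not how Dalvik or ART actually implement collection.

```java
// Toy mark phase: find which "objects" are still reachable from the roots.
// Names and the string-based heap model are made up for illustration.
import java.util.*;

public class MarkSketch {

    // heap maps each object name to the names of the objects it references.
    static Set<String> mark(Map<String, List<String>> heap, List<String> roots) {
        Set<String> live = new HashSet<>();
        Deque<String> work = new ArrayDeque<>(roots);
        while (!work.isEmpty()) {
            String obj = work.pop();
            if (live.add(obj)) {                        // first time seen
                work.addAll(heap.getOrDefault(obj, List.of()));
            }
        }
        return live;
    }

    public static void main(String[] args) {
        Map<String, List<String>> heap = Map.of(
            "A", List.of("B"),
            "B", List.of(),
            "C", List.of("B"));                         // nothing references C
        Set<String> live = mark(heap, List.of("A"));
        System.out.println(live.contains("C"));         // false: C is garbage
    }
}
```

Principle number two is then simply to reclaim everything the mark phase did not reach.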
-
Simple concept in theory, but
it gets pretty complex once you've
-
got 2 million lines of code and
four gigs worth of allocations.
-
Now think about it, garbage collection
can be really gnarly, I mean,
-
if you've got some 20,000 allocations
in your program right now,
-
which ones aren't needed anymore?
-
Or better yet, when should you
execute a garbage collection event
-
to free up memory that isn't used?
-
These are actually very
difficult questions, and
-
thankfully we've had about 50 years'
worth of innovation to refine the answers,
-
which is why the garbage
collector in Android's runtime
-
is quite a bit more sophisticated
than McCarthy's original proposal.
-
It's been built to be as fast and
non-intrusive as possible.
-
Effectively, the memory heaps in Android's
runtimes are segmented into spaces,
-
based on the type of allocation and
-
how best the system can organize
allocations for future GC events.
-
As a new object is allocated,
-
its characteristics are taken into
account to determine which space it
-
should be placed into, depending on what
version of the Android runtime you're using.
-
And here's the important part.
-
Each space has a set size.
-
As objects are allocated, we keep
track of the combined sizes, and,
-
as a space starts to fill up, the system
will need to execute a garbage
-
collection event in an attempt to free
up memory for future allocations.
-
Now it's worth pointing out that
GC events will behave differently
-
depending on what Android
runtime you're using.
-
For example, in Dalvik many GC events
are stop-the-world events, meaning
-
that any managed code that is running
will stop until the operation completes.
-
This can get very problematic when
these GCs take longer than normal, or
-
there's a ton of them happening at once,
-
since they're going to significantly
eat into your frame time.
-
>> And to be clear,
-
the Android engineers have spent a lot
of time making sure that these events
-
are as fast as possible to reduce
interruptions. That being said,
-
they can still cause some application
performance problems in your code.
-
Firstly, understand that the more time
your app is spending doing GCs in
-
a given frame, the less time it's got
for the rest of the logic needed to keep
-
you under the 16 millisecond
rendering barrier.
-
So if you've got a lot of GCs, or some
long ones occurring right after
-
each other, it might put your frame
processing time over the 16 millisecond
-
rendering barrier, which will cause
visible hitching or jank for your users.
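As a quick back-of-the-envelope check on that number, the 16 millisecond barrier comes from targeting 60 frames per second, and every millisecond a GC spends inside a frame comes out of that budget. The GC cost below is a hypothetical figure for illustration:

```java
// Where the 16 ms rendering barrier comes from: 60 frames per second
// means each frame gets 1000 ms / 60 of time; GC work inside a frame
// is subtracted from what's left for your app's logic.
public class FrameBudget {
    public static void main(String[] args) {
        double frameBudgetMs = 1000.0 / 60.0;          // ~16.67 ms per frame
        double gcTimeMs = 5.0;                         // hypothetical GC cost
        double remainingMs = frameBudgetMs - gcTimeMs; // left for app logic
        System.out.printf("%.2f ms budget, %.2f ms left after GC%n",
                frameBudgetMs, remainingMs);
    }
}
```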
-
Secondly, understand that your code
flow may be doing the kinds of work that
-
force GCs to occur more often, or
make them last longer than normal.
-
For example, if you're allocating a
hoard of objects in the innermost part
-
of a loop that runs for a long time,
then you're going to be polluting your
-
memory heap with a lot of objects and
you'll end up kicking off a lot of GCs
-
quickly, due to this
additional memory pressure.
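A minimal sketch of that pattern, with made-up method names: the first version allocates a fresh object on every pass of the loop, while the second reuses one allocation made outside it.

```java
// Allocating inside a hot loop versus reusing one object.
public class LoopChurn {

    // Churns the heap: each += on a String allocates a new String (plus a
    // hidden StringBuilder) per iteration, piling up garbage for the GC.
    static String buildWithChurn(int n) {
        String s = "";
        for (int i = 0; i < n; i++) {
            s += "x";                     // new allocation every iteration
        }
        return s;
    }

    // Reuses one StringBuilder allocated before the loop starts.
    static String buildReused(int n) {
        StringBuilder sb = new StringBuilder(n);
        for (int i = 0; i < n; i++) {
            sb.append('x');               // no new objects per iteration
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Same result either way; only the allocation behavior differs.
        System.out.println(buildWithChurn(5).equals(buildReused(5))); // true
    }
}
```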
-
And even though we're in
a managed memory environment,
-
memory leaks can still happen.
-
Although they're not as easy to
create as in other languages.
-
These leaks can pollute your heap
with objects that won't get freed
-
during a GC event, effectively reducing
the amount of usable space you have and
-
forcing more GC events to be kicked
off on a regular basis as a result.
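One common way this happens in a managed language, sketched with made-up names: objects parked in a static collection stay reachable for the life of the process, so no GC event can ever reclaim them.

```java
// A managed-memory leak: a static collection that only ever grows.
import java.util.ArrayList;
import java.util.List;

public class LeakyCache {
    // Everything added here stays reachable forever, so the GC can see
    // these objects but can never free them.
    private static final List<byte[]> CACHE = new ArrayList<>();

    static void handleEvent() {
        CACHE.add(new byte[1024]);  // added but never removed: a leak
    }

    static int leakedEntries() {
        return CACHE.size();
    }

    public static void main(String[] args) {
        for (int i = 0; i < 100; i++) {
            handleEvent();
        }
        // ~100 KB of heap that no GC event will ever reclaim.
        System.out.println(leakedEntries()); // 100
    }
}
```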
-
So that's the deal, I mean,
-
if you want to reduce the amount of GC
events that happen in a given frame,
-
then you need to focus on
optimizing your app's memory usage.
-
From a code perspective, it might be
difficult to track down where problems
-
like these are coming from, but
-
thankfully, the Android SDK has a set
of powerful tools at your disposal.
-
Let's take a look.