David McGinnis

Spark Job Optimization Myth #4: I Need More Overhead Memory

Updated: Feb 4, 2020


A bit of nostalgia for us 90's kids. Also, the first google search hit for images of "overhead"

Since we rang in the new year, we've been discussing various myths that I often see development teams run into when trying to optimize their Spark jobs. So far, we have covered a few of these in the earlier posts in this series.

This week, we're going to build on last week's discussion of the driver's memory structure and apply it to both the driver and executor environments. In this case, we'll look at the overhead memory parameter, which is available for both the driver and the executors.


While I've seen this myth applied less commonly than the others we've talked about, it is a dangerous one that can easily eat away at your cluster resources without any real benefit. Understanding what this value represents, and when it should be set manually, is important for any Spark developer hoping to optimize their jobs.


What Is Overhead Memory?

The first question we need to answer is what overhead memory is in the first place. Overhead memory is essentially all memory which is not heap memory. This includes things such as the following:

  • Call stacks

  • Memory-mapped files

  • Shared libraries

  • Constants defined in code

  • The code itself

Looking at this list, there isn't much space needed. Memory-mapped files and shared libraries are really the only potentially large pieces here; otherwise, we are not talking about a lot of room.


The developers of Spark seem to agree: the default value is 10% of the executor (or driver) memory, with a minimum of 384 MB. This means that leaving this value unset is often perfectly reasonable, since the default will give you a sensible result in most cases.
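As a minimal sketch of how that default plays out (the memory sizes below are purely illustrative):

    # Sketch of how Spark derives the default overhead:
    # 10% of the executor/driver memory, with a 384 MB floor.
    def default_overhead_mb(memory_mb: int) -> int:
        return max(int(memory_mb * 0.10), 384)

    print(default_overhead_mb(2048))   # 2 GB executor  -> 384 MB (the floor wins)
    print(default_overhead_mb(8192))   # 8 GB executor  -> 819 MB
    print(default_overhead_mb(16384))  # 16 GB executor -> 1638 MB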


Why Do People Increase It?

The most common reason I see developers increasing this value is in response to an error like the following.
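If you run Spark on YARN, you have almost certainly seen some variation of the container kill message; it typically looks something like this (the executor id and sizes here are illustrative, and newer Spark versions point at spark.executor.memoryOverhead instead):

    ExecutorLostFailure (executor 7 exited caused by one of the running tasks)
    Reason: Container killed by YARN for exceeding memory limits.
    10.4 GB of 10.4 GB physical memory used.
    Consider boosting spark.yarn.executor.memoryOverhead.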

This error very obviously tells you to increase memory overhead, so why shouldn't we? Because there are a lot of interconnected issues at play here that first need to be understood, as we discussed above.


While you'd expect the error to show up only when overhead memory is exhausted, I've found that it happens in other cases as well, which leads me to believe it is not exclusively due to running out of off-heap memory. Because of this, we need to figure out why we are actually seeing the error before reaching for more overhead memory.


How Do We Solve The Error Instead?

If we see this issue pop up consistently, every time the job runs, then it is very possible this really is a lack of overhead memory. If this is the case, consider what is special about your job that would cause it. The defaults should work 90% of the time, but if you are using large libraries outside of the normal ones, or memory-mapping a large file, then you may need to tweak the value.
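If you do conclude that your job genuinely needs more overhead, the setting itself is straightforward. A minimal sketch, assuming Spark 2.3+ property names and an illustrative 1 GB value:

    from pyspark.sql import SparkSession

    # Sketch: explicitly raising overhead memory for a job that memory-maps large
    # files or loads big native libraries. The 1g value is illustrative.
    # Executor settings can be set here because executors are requested after the
    # session starts; driver settings must be passed at submit time, e.g.:
    #   spark-submit --conf spark.driver.memoryOverhead=1g ...
    # (On older Spark-on-YARN versions the property is spark.yarn.executor.memoryOverhead.)
    spark = (
        SparkSession.builder
        .appName("job-with-native-libs")  # hypothetical app name
        .config("spark.executor.memoryOverhead", "1g")
        .getOrCreate()
    )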


Another common scenario I see is users who have a large value for executor or driver core count. Each executor core is a separate thread and thus will have a separate call stack and copy of various other pieces of data. Consider whether you actually need that many cores, or if you can achieve the same performance with fewer cores, less executor memory, and more executors. We'll be discussing this in detail in a future post.
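As a rough illustration of that trade-off (all of the numbers here are made up; both layouts use about the same total cores and memory):

    # Two ways to get ~20 cores of parallelism across ~80g of executor memory.
    # Many small executors usually need less overhead per JVM than a few huge ones.
    few_big_executors = {
        "spark.executor.instances": "4",
        "spark.executor.cores": "5",
        "spark.executor.memory": "20g",
    }

    many_small_executors = {
        "spark.executor.instances": "10",
        "spark.executor.cores": "2",
        "spark.executor.memory": "8g",
    }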


If we look at the types of data that are kept in overhead memory, we can clearly see that most of them will not change between runs of the same application with the same configuration. Based on that, if we are seeing this error intermittently, we can safely assume the issue isn't strictly due to a lack of overhead memory.


If the error comes from an executor, we should verify that we have enough memory on the executor for the data it needs to process. It might be worth adding more partitions or increasing executor memory.
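For example, a quick way to spread the work more thinly is to raise the partition count (a sketch; df and the value 400 are placeholders for your own data and sizing):

    # Sketch: spread the same data over more, smaller partitions so each task
    # (and each executor core) holds less in memory at once.
    df = df.repartition(400)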


If it comes from the driver intermittently, this is a harder issue to debug. The first check should be that no data of unknown size is being collected to the driver. If it is, it's possible that the data is occasionally too large, causing this issue. Collecting data from Spark to the driver is almost always a bad idea, and this is one example of why.
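A hedged example of the pattern to look for (results_df and the output path are placeholders):

    # Risky: pulls an unbounded result set into driver memory.
    rows = results_df.collect()

    # Safer alternatives: keep the data distributed, or cap what comes back.
    results_df.write.parquet("/tmp/results")   # illustrative output path
    sample = results_df.limit(1000).collect()  # bounded, if you truly need local rows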


Additionally, you should verify that the driver cores are set to one. Setting it higher only helps when you have a multi-threaded application, and since you are using the executors as your "threads", there is very rarely a need for multiple threads on the driver, and therefore very rarely a need for multiple driver cores.


You may also want to understand why this is happening on the driver in the first place. Looking at what code actually runs on the driver, and how much memory it requires, is useful here.


One thing you might want to keep in mind is that creating lots of dataframes can use up your driver memory quickly without you realizing it. An example of this pattern is below; it can easily cause your driver to run out of memory.
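A sketch of the kind of code in question (the column names, source path, and lit(0) values are all placeholders):

    from pyspark.sql import functions as F

    # Each withColumn call produces a brand new dataframe, so this loop leaves a
    # long chain of them behind before count() finally runs.
    columns = ["col_{}".format(i) for i in range(100)]  # illustrative column names

    df = spark.read.parquet("/data/input")  # placeholder source
    for column in columns:
        df = df.withColumn(column, F.lit(0))

    print(df.count())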

Keep in mind that with each call to withColumn, a new dataframe is created, and it is not released until the last action on any dataframe derived from it has run. That means that if len(columns) is 100, you will have at least 100 dataframes in driver memory by the time you get to the count() call.


If none of the above did the trick, then an increase in driver memory may be necessary. This will increase the total memory* as well as the overhead memory (since the default overhead is a percentage of the memory setting), so either way you are covered. Increase the value slowly and experiment until you find a value that eliminates the failures.


When Is It Reasonable To Increase Overhead Memory?

The last few paragraphs may make it sound like overhead memory should never be increased. If that were the case, the Spark developers would never have made it configurable, right? So let's discuss the situations in which it does make sense.


One common case is if you are using a lot of executor cores. We'll discuss next week when that makes sense, but if you've already made that decision and are running into this issue, increasing overhead could be reasonable. As discussed above, increasing executor cores increases overhead memory usage, since each additional thread needs its own call stack and copies of various other pieces of data. Additionally, it might mean some things need to be brought into overhead memory in order to be shared between threads. For a small number of cores, no change should be necessary. But if you have four or more executor cores and are seeing these issues, it may be worth considering.


Another case is using large libraries or memory-mapped files. If you are using either of these, then all of that data is stored in overhead memory, so you'll need to make sure you have enough room for them.


In either case, make sure that you adjust your overall memory value as well, so that you're not stealing memory from your heap to feed your overhead memory. Doing that just trades overhead errors for heap memory issues later.
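A sketch of what that looks like in practice (the sizes are illustrative; under YARN the container request is roughly the heap plus the overhead):

    from pyspark.sql import SparkSession

    # Sketch: size the heap and the overhead together so the heap you actually
    # need isn't squeezed. With these illustrative values, YARN will request
    # roughly 8g + 2g = 10g per executor container.
    spark = (
        SparkSession.builder
        .config("spark.executor.memory", "8g")
        .config("spark.executor.memoryOverhead", "2g")
        .getOrCreate()
    )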


Conclusion

And that's the end of our discussion of the JVM's overhead memory and how it applies to Spark. Hopefully this gives you a better grasp of what overhead memory actually is, and how to make use of it (or not) in your applications to get the best performance possible.


As always, feel free to comment or like with any more questions on this topic or other myths you'd like to see me cover in this series!


Next, we'll be covering increasing executor cores: what it does, how it works, and why you should or shouldn't do it. It's likely to be a controversial topic, so check it out!


* - A previous edition of this post incorrectly stated: "This will increase the overhead memory as well as the overhead memory, so in either case, you are covered." This is obviously wrong and has been corrected.
