Talking is hugely low bandwidth at best; it’s a terribly inefficient way to transmit large amounts of information.

      I think within red teams it’s a huge hindrance that must necessarily be overcome. The work that we do is incredibly complex and terribly high in specificity. Especially when you bring in the issues that our perspective often adds to our tasks. We don’t usually have a control console flashing lights and outputting debug information to tell us what a system is doing internally as we interrogate it. Instead, we have to work out what’s happening inside by tracking huge numbers of variables indicated by external responses (such as error messages). Communicating exactly what is happening (and when) to each other can be a huge challenge.

      When it comes to working with the defenders/blue team/product engineers, I’m a huge fan of the collaborative engagement model in which the “find a way in and tell us a cool story” method of black-box testing is replaced by the gray-box work with the engineering team testing discrete sections of the application/network to understand the threats posed to exactly that portion of the system independent of any other protective layers. We don’t want to find one “cool” way in. We want to find and patch all the many different ways in.

      A lot of red teams are absolutely trying to prove something and come home with cool stories to impress their friends. They want to live the life of a hacker, and sure, it’s cool to find XSS in a site and pivot that into full control over it. You know what’s even cooler? Actually helping the security team maximize the security of the application. You do this by forgetting about the glory and working directly with the security team to remove the “fog of war” around the application. Then you apply concrete adversarial-security engineering skills to building a model for a threat against any system/subsystem therein.

       What is your approach to debriefing and supporting blue teams after an operation is completed?

      Full transparency. My skill set isn’t “running Metasploit,” and my access doesn’t rely on a bucket of zero days I pull from to impress people. We’re kidding ourselves if we’re not engineers first and foremost. Engineering is a skill that I hold inside my mind, and I don’t lose anything by showing people what I’ve done. The next challenge will be different, the next project will be different, and I’ll engineer my way through them as well.

      You should be able to answer any and all questions asked of you by the product team. There isn’t any special sauce here, and any red teamer who tells you there is, is simply holding onto whatever weak power they might have for fear of losing it. If questioned, I consider it my job to tell the product team literally everything I know. If they can hold all that knowledge, then I as a red cell professional need to move on and discover more. There is no secret sauce.

      We really do need to be the virtual explorers and cartographers of our field, helping everyone to follow in our footsteps as quickly as possible and then moving on to new horizons when they arrive.

       If you were to switch to the blue team, what would be your first step to better defend against attacks?

      With Active Defense, you’re looking at interdicting attack methodologies as opposed to simply trying to “harden” everything. If your plan is to remove every last vulnerability from your network, you’re going to have a bad time. But if you implement systems that can anticipate adversarial actions and counter them to gain you something, you might just get positive play out of it.

      I was the lead developer on Black Hills Info Security’s Active Defense Harbinger Distribution (ADHD) for a few years. This tool was designed to tackle this exact problem space. We want to find ways to anticipate the types of actions that an adversary will take and then do things to hamper them, or sometimes to do more than just hamper them. If you look at tooling like Honeybadger (inside ADHD), you’ll find that it’s actually possible to track an adversary down to their physical location when they try to hack you.

      There is a lot that can be done, and it’s not even hard to do. Most teams just haven’t thought to try. But you can, and you should.

      This is a special question that’s dear to my heart. If you’re reading this and you’re part of a blue team that wants to do this type of thing but have no idea where to go, please do reach out to me directly on Twitter, and I’ll do what I can to point you to resources that can get you on the right track.

       What is some practical advice on writing a good report?

      Be detailed, correct, and honest. It sounds crazy simple, but in my experience, these are things people struggle with.

      Your reports need to contain enough detail that any findings you produce can be easily understood and, wherever possible, easily reproduced. You can check this by reading it over once or by having someone outside of your project review it for you. If your writing makes any leaps from one idea to the next, you need to fill in exactly what you meant.

      You should be correct in all your report writing. What this means is that you should ensure you tell the full honest and clear truth. Don’t insinuate things that aren’t there in order to make your team look better, and don’t be overconfident. I like to write using words like can, might, and may unless I’m absolutely sure about something. So much of what we do is chaotic (highly complex) and filled with massive gray areas. It’s unlikely that you will ever be able to say much with 100 percent certainty. Don’t write like you know it all.

      Finally, be honest in your report writing. Be brave even. Be willing to say that you found something truly damaging. Remember, your job is to play the part of an adversary. There aren’t too many highly destructive adversaries in the world for one simple reason. People aren’t often willing to risk it all. But rest assured, when a true villain strikes an organization, they won’t pull any punches. You need to be brave, abnormally brave, when it comes to trying things that are hard. Be willing to run attacks that may fail, and be willing to be embarrassed if they do. Be willing to write about these attacks, and admit when you don’t know things. Your clients aren’t just paying you; they’re relying on you. Act like it.

       How do you recommend security improvements other than pointing out where it’s insufficient?

      You need to have at least a cursory understanding of what’s actually happening in three areas if you want to be a good red team member.

       You need to know what attacking is like: This is the first one that most aspiring red cell members rush to learn. They want to know “how to hack,” so they rush out and start reading books and watching tutorials or talks looking to understand what the attack looks like/how it’s carried out. This is great. You absolutely must know these things. But this isn’t everything.

       You need to know what your attack does: You need to understand what the attack is actually doing. Not just “Oh, the SQL injection string is injecting SQL.” I’ve met people who knew ' or 1=1;-- but didn’t know a drop of actual Structured Query Language. These are not serious people. (A short sketch of what that payload actually does follows this list.)

       You need to know what your target system does when you’re not around: Plain and simple, you need to actually understand what it is you’re attacking. Perhaps not intensely or in depth, but you should have at least a cursory understanding of everything—every computer, every system, every person you interact with—outside of the context of your run.
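      To make that point about understanding your attack concrete, here is a minimal sketch: a hypothetical Python login check against an in-memory SQLite users table. The table, column names, and functions are made up for illustration, not taken from any engagement or tool. It shows what a ' or 1=1-style payload actually does to a naively concatenated query and how a parameterized query shuts it down; the semicolon from the classic payload is dropped only so Python’s sqlite3 module sees a single statement.

```python
import sqlite3

# Hypothetical, self-contained setup: a tiny users table with one account.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def vulnerable_login(username, password):
    # Naive string concatenation: attacker input becomes part of the SQL syntax.
    query = (
        "SELECT * FROM users WHERE username = '" + username + "' "
        "AND password = '" + password + "'"
    )
    print("Query the database actually sees:", query)
    return conn.execute(query).fetchall()

def safe_login(username, password):
    # Parameterized query: input stays data and never becomes SQL syntax.
    return conn.execute(
        "SELECT * FROM users WHERE username = ? AND password = ?",
        (username, password),
    ).fetchall()

# The payload closes the string literal, adds a tautology (1=1), and the
# trailing -- comments out the rest of the WHERE clause.
payload = "' OR 1=1 --"
print(vulnerable_login(payload, "wrong-password"))  # returns alice's row: auth bypass
print(safe_login(payload, "wrong-password"))        # returns []: payload is treated as an odd username
```

      The snippet itself isn’t the point; the point is being able to explain why the tautology and the trailing comment rewrite the WHERE clause. That is the difference between knowing a payload and knowing what your attack does.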

      If you know these three things, you can do your job, you can do it well, and you can explain the context and implications of your actions to the security team with professionalism and ease.

       What nontechnical skills or attitudes do you
