Active Directory Access Security

Planning and designing an access model for Active Directory is crucial for any Windows-based environment. Nowadays, many attacks succeed because of poor planning of how privileged access is used in the company, especially credential theft attacks like Pass-the-Hash (PtH) and Pass-the-Ticket (PtT). Unfortunately, companies focus on implementing irrelevant or less important security controls but forget critical principles that protect the company's attacker-attractive accounts.

In this post, I will highlight some principles that should not be overlooked in the company before implementing any security controls related to access management. After the principles, we will see some best practices built upon them. But before that, let's define some terms:

  • Attacker-Attractive Accounts are accounts that attackers try to gain control over, either because of their privileges (like Domain Admins, Enterprise Admins, server admins, IT help desk) or because of their access to critical information (like executive directors (VIPs), HR personnel, intellectual property personnel, etc.).
  • Deep Privileged Accounts are accounts that have privileged/administrative access to specific system(s). An example is an IIS admin who has administrative access on an IIS server.
  • Broad Privileged Accounts are accounts that have a specific privilege to carry out a specific activity across a large number of systems. An example is an IT support engineer who has the reset-password privilege for all endpoints.
  • Build and Emergency Accounts are accounts used to build up the system and are no longer needed once the system is ready, except in case of emergency. An example is the 'Administrator' account that was used to install the system and is now disabled, being enabled only in safe mode.

Active Directory Security Principles

I have compiled the following list based on a risk-driven approach and the best practices defined by Microsoft:

  1. Try to eliminate direct access to accounts that have both broad and deep privilege, as this reduces the attack surface and the impact of credential theft attacks. With this principle, we try to segregate roles, implement workflow processes and define administration tiers.
  2. Eliminate the use of privileged accounts in non-secure or high-risk environments. This principle mandates the use of secure administrative workstations that are separate from normal everyday-use workstations. This means an admin will have two workstations: one for administration tasks and another for email, Internet and other use.
  3. [Strictly] Eliminate the use of privileged accounts for non-administrative tasks. In other words, create two accounts for each administrator: one standard account for normal use and another privileged account for administrative tasks only.
  4. Require multi-factor authentication for critical privileges like the DC built-in Administrator.
  5. The Clean Source Principle, which requires all security dependencies to be as trustworthy as the object (privileged credentials) being secured. So all subjects (processes, systems, etc.) that need access to privileged credentials must be as critical and secure as those credentials.
  6. Emergency accounts should always be disabled and only enabled for recovery.
  7. Audit and monitor high-impact administrators, keeping the Clean Source Principle in mind.
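
A practical first step toward these principles is an inventory of who actually holds broad and deep privilege. Below is a minimal Python sketch (my own illustration, using the ldap3 library; the host, credentials, base DN and group list are placeholders) that lists the members of some attacker-attractive groups:

# Sketch: enumerate members of attacker-attractive AD groups via LDAP.
# Host, credentials, base DN and group names are placeholders.
from ldap3 import Server, Connection, NTLM

BASE_DN = 'DC=corp,DC=example,DC=com'
GROUPS = ['Domain Admins', 'Enterprise Admins', 'Administrators']

server = Server('dc01.corp.example.com')
conn = Connection(server, user='CORP\\auditor', password='***',
                  authentication=NTLM, auto_bind=True)

for group in GROUPS:
    # Find the group object and read its member attribute.
    conn.search(BASE_DN, f'(&(objectClass=group)(cn={group}))',
                attributes=['member'])
    if conn.entries:
        print(group)
        for member_dn in conn.entries[0].member:
            print('   ', member_dn)

conn.unbind()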

Now, let's look at some best practices that have the previous principles built in.

1. AD Administrative Tier Model

Microsoft defines the tier model to create buffer zones among AD Administration, Server Administration and Workstation Administration.

Diagram showing the three layers of the Tier model
Credit: Microsoft

Most of the time, the attacks come from the workstations in Tier 2. Thus, administrator accounts in Tier 2 must be isolated and secured to minimize the impact. In Tier 1, we have the administrators for enterprise servers and applications. Lastly, Tier 0 contains the AD admins.

Administration within the same tier is allowed but blocked from a lower tier to an upper tier. However, administration from an upper tier to a lower tier is allowed after a workflow process (e.g., PAM).

Diagram of Control restrictions
Credit: Microsoft


2. Red Forest or Enhanced Security Administrative Environment (ESAE)

ESAE is an initiative (and professional service) from Microsoft whose core idea is to keep AD administration accounts in a separate forest (the admin or 'red' forest), with Just-in-Time (JIT) administration, Just-Enough-Administration (JEA) and a PIM trust enabled between the red and production forests.

Figure showing an ESAE forest used for administration of Tier 0 Assets and a PRIV forest configured for use with Microsoft Identity Manager's Privileged Access Management capability
Credit: Microsoft


3. Privileged Access Workstations (PAW)

A PAW, or Secure Admin Workstation (SAW), is a hardened workstation used for administration only. Microsoft defines many technical controls to secure a PAW; here I list some (a small script to spot-check two of them follows the list):

  • Always up to date, with no delay in deploying security patches
  • Only for administration
  • No local built-in admins or powerful users
  • Logon Restriction
  • No inbound connections
  • No Internet access
  • No Firewall Override
  • Prevent Proxy Change
  • Enable RestrictedAdmin mode
  • Enable EMET
  • Enable Credential Guard
  • Multi-factor authentication: Smart Card/Virtual Smart Card
  • AntiMalware
  • Secure Boot
  • Use Protected Users, Authentication Policies, and Authentication Silos
  • AppLocker or Device Guard
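
Some of these controls can be spot-checked from the PAW itself. Here is a minimal Python sketch for two of them, RestrictedAdmin mode and Credential Guard, using the registry values commonly documented for them (treat the exact names and meanings as assumptions to verify for your Windows build):

# Sketch: spot-check two PAW hardening settings via the Windows registry.
# Value names/meanings are the commonly documented ones; verify per OS build.
import winreg

LSA_KEY = r'SYSTEM\CurrentControlSet\Control\Lsa'

def read_dword(value_name):
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, LSA_KEY) as key:
            value, _ = winreg.QueryValueEx(key, value_name)
            return value
    except FileNotFoundError:
        return None   # value not present

# DisableRestrictedAdmin = 0 means RestrictedAdmin mode is enabled.
print('DisableRestrictedAdmin:', read_dword('DisableRestrictedAdmin'))
# LsaCfgFlags 1 or 2 indicates Credential Guard is turned on.
print('LsaCfgFlags:', read_dword('LsaCfgFlags'))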

References:

https://docs.microsoft.com/en-us/windows-server/identity/ad-ds/plan/security-best-practices/best-practices-for-securing-active-directory

https://docs.microsoft.com/en-us/windows-server/identity/securing-privileged-access/securing-privileged-access-reference-material

https://docs.microsoft.com/en-us/windows-server/identity/securing-privileged-access/privileged-access-workstations

Best Fuzzing Tools 2017

For the purpose of building a fuzzing lab, I was searching for the best fuzzing tools to include in it. I went through almost all the fuzzing tools available and decided to share my final list here. The lab setup and configuration will be covered in a different post, InshAllah.

The following are my criteria for shortlisting a fuzzing tool:

  1. Active development
  2. Age and history of the tool
  3. Discovered vulnerabilities
  4. Categories: File Fuzzing, Network Fuzzing and Browser Fuzzing

Performance and the fuzzing algorithm used are not included in my criteria.

The tools are listed in alphabetical order.

1. Fuddly

A general fuzzing framework with which you can fuzz files and network protocols. Fuddly is at its best when you know exactly when and where you want to fuzz the target. It uses a JSON-like format to represent data. Its features include the capability to fuzz in situations with data constraints, time constraints and state constraints.

2. Honggfuzz

A powerful yet easy-to-use general fuzzer. Honggfuzz has a nice track record of discovered security bugs (including a critical vulnerability in OpenSSL). We can feed it a simple input and honggfuzz will start working. It lives in a Google repository; however, it is not an official Google product.

3. Peach

Peach is a commercial fuzzer; however, a community edition is available. Peach Community 3 is a cross-platform fuzzer capable of performing both dumb and smart fuzzing. It supports file formats, network protocols and APIs as targets, ranging from web browsers and network services to mobile devices and even SCADA. Peach has been in active development since 2004.

4. Radamsa

Radamsa is a test-case generator whose output can be fed to the target to fuzz it. Radamsa is easy to use, with a good track record of discovered security bugs. It needs only a sample input and it will start generating cases. To build a full-featured fuzzer around Radamsa, you need scripting (mainly Unix) skills, as in the sketch below.
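
As an illustration of that scripting, here is a minimal Python harness (my own sketch, not part of Radamsa; the sample path, target command and case count are placeholders) that generates mutated cases and feeds them to a target, flagging crashes and hangs:

# Sketch: drive radamsa to mutate a sample and throw the cases at a target.
# 'sample.pdf', './target' and the case count are placeholders for your lab.
import subprocess

SAMPLE = 'sample.pdf'   # known-good input to mutate
CASES = 1000            # number of mutated test cases

for i in range(CASES):
    case = f'/tmp/case_{i}'
    # radamsa reads the sample and writes one mutated variant to -o.
    subprocess.run(['radamsa', '-o', case, SAMPLE], check=True)
    try:
        # Run the target on the mutated input with a timeout to catch hangs.
        result = subprocess.run(['./target', case],
                                capture_output=True, timeout=5)
    except subprocess.TimeoutExpired:
        print(f'hang on {case}')
        continue
    # A negative return code means the process died from a signal
    # (e.g. -11 is SIGSEGV), which is what we are fishing for.
    if result.returncode < 0:
        print(f'crash (signal {-result.returncode}) on {case}')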

Other older, recognized tools: Sulley, SPIKE, UniOFuzz and Hodor.

For my testing lab, I chose Peach and Honggfuzz. The details of the lab will be in the next post, InshAllah.

Secure Software Development Lifecycle

This post talks about aligning security with the software development lifecycle in order to produce more robust applications with fewer vulnerabilities. This alignment is a strategic action, as its benefits last for the long term.

We can achieve a secure SDLC by adding security checkpoints throughout the software delivery process, independent of the methodology, be it Waterfall, Agile or others.

If you would like to implement a secure development lifecycle, partially or fully, in your organization, I would be happy to help and share my experience.

Why Secure Development Lifecycle?

One important aspect of vulnerability management is to discover and close vulnerabilities as early as possible. In the context of applications, the earliest point to discover a vulnerability is during development. The main objective of having a secure development lifecycle (SDL) is to eliminate application vulnerabilities and security bugs. Another aspect is that security (reliability) is one dimension of software quality, and a lot of companies mandate an SDL for in-house developed applications, outsourced applications and off-the-shelf solutions.

Why Security in Applications?

For hackers, whether acting with intent or not, the application is the main door for interacting with the targeted organization. This makes it crucial to have attack-resistant applications. However, a secure application alone is not enough; we also need to secure all application dependencies, including the network, the OS, the platform, the framework, the libraries and the browser (for web applications).

In general, security has three pillars: Confidentiality, Integrity and Availability (CIA). However, in terms of application security we also add Authentication, Authorization and Accountability (AAA). For legal requirements we go a step further and add non-repudiation to our applications.

SDL Phases

1- Requirement Analysis Phase

In the requirements phase we carry out a high-level risk assessment with the goal of identifying security requirements.

Example: The company wants to develop an internal Java-based application that processes employee information. As a security checkpoint in this phase, we require a login page as the entry page for this application, preferably integrated with Active Directory to achieve SSO.

2- Design Phase

Here we focus on earning the "Secure by Design" label for our software. This can be done through threat modeling, so that we can identify the threats to the proposed software design and implement the required controls in response. Microsoft has developed a good methodology called STRIDE, and a free tool is available: the Microsoft Threat Modeling Tool.

Example: For the same application, we identify sniffing as a potential threat (Information Disclosure in STRIDE). As a security checkpoint in this phase, we require that the LDAP integration with AD use a SASL/DIGEST-MD5 bind and not a simple bind.
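
To make that checkpoint concrete, here is a minimal sketch of a DIGEST-MD5 bind versus a simple bind. I use the Python ldap3 library purely for illustration (the example application is Java, where JNDI exposes the equivalent setting); the host and credentials are placeholders:

# Sketch: SASL DIGEST-MD5 bind instead of a simple (cleartext) bind.
# Host and credentials are placeholders.
from ldap3 import Server, Connection, SASL, DIGEST_MD5

server = Server('dc01.corp.example.com')

# A simple bind would send the password in cleartext:
#   Connection(server, user='cn=app,...', password='***')

# DIGEST-MD5 uses a challenge/response, so no cleartext password crosses
# the wire; 'sign' in the credentials tuple requests integrity protection.
conn = Connection(server, authentication=SASL,
                  sasl_mechanism=DIGEST_MD5,
                  sasl_credentials=(None, 'appuser', '***', None, 'sign'))
conn.bind()
print(conn.result)
conn.unbind()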

3- Build Phase

In this phase, we want to achieve "built-in security" by following secure coding practices, avoiding vulnerable components and using the built-in security controls of the operating system, browser, etc.

Enablers in this phase include:

– Approved list of libraries
– Source Code Security Analyzer (SAST)

Example: We use the controls provided by browsers, such as the HttpOnly and Secure flags for cookies, XSS protection, etc. A minimal sketch follows.
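
Here is a small server-side sketch of those controls (Flask is my choice for illustration; the checkpoint itself is framework-agnostic):

# Sketch: set HttpOnly/Secure cookie flags and a legacy XSS header in Flask.
from flask import Flask, make_response

app = Flask(__name__)

@app.route('/')
def index():
    resp = make_response('hello')
    # HttpOnly keeps JavaScript from reading the cookie, Secure restricts
    # it to HTTPS, and SameSite limits cross-site sending.
    resp.set_cookie('session_id', 'opaque-value',
                    httponly=True, secure=True, samesite='Strict')
    # Legacy header that enables the reflective-XSS filter in older browsers.
    resp.headers['X-XSS-Protection'] = '1; mode=block'
    return resp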

4- Testing Phase

Along with UAT, we carry out security testing to make sure the security requirements are implemented and the application is secure against known attack vectors.

Enablers in this phase include:

– Penetration Test
– Fault-injection Test / Fuzzing

5- Deployment Phase

A secure deployment process is implemented to make sure that the same approved build is installed in production with the approved secure configuration.
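
A simple way to enforce the "same approved build" part is to compare cryptographic hashes of the artifact; a minimal Python sketch (the file name and approved hash are placeholders):

# Sketch: verify the deployed artifact matches the approved build by hash.
import hashlib

APPROVED_SHA256 = '<hash recorded at build approval>'   # placeholder

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(8192), b''):
            h.update(chunk)
    return h.hexdigest()

deployed = sha256_of('app-release.war')   # hypothetical artifact name
print('match' if deployed == APPROVED_SHA256 else 'MISMATCH - do not deploy')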

GIAC Exploit Researcher and Advanced Penetration Tester (GXPN) – Review

I recently passed GXPN with a great score (96%), and here is my review of the course and the exam.

SANS/GIAC is among the most informative and prestigious training/certification providers in the information security industry. GXPN is the most advanced penetration testing certification offered by SANS/GIAC.

My Background

I have almost 7 years of experience in penetration testing, and I had hands-on but scattered knowledge of roughly 75% of the course syllabus.

SANS 660 Course

SEC 660: Advanced Penetration Testing, Exploit Writing, and Ethical Hacking is the course for GXPN. The course is very informative and gives you almost everything you need to start finding vulnerabilities and writing exploits.

The course runs for 6 days:

Day 1: This day covers mainly network-level attacks, starting with bypassing NAC, then MitM attacks, routing protocol attacks, SNMP, network manipulation and others.

Day 2: This day covers crypto algorithms and attacks on them, then moves to network booting attacks, then PowerShell for penetration testers, and finally attacks on restricted environments like kiosks, SRP and AppLocker.

Day 3: Here things get more difficult. This day covers Python, Scapy, Sulley and other fuzzing tools.

Day 4: This day covers Linux exploitation, starting with an introduction to memory and the CPU, especially on Linux.

Day 5: This day covers Windows exploitation and anti-exploitation techniques.

Day 6: Bootcamp (CTF).

GXPN Exam

The exam is objective, with about 60 questions. There are 7 lab questions for which I had access to a remote desktop in order to figure out the answers.

The exam is open book, and I prepared two indexes for it. The first covers every tool used in the course, its usage and the page number. The other is a term index.

I had two practice tests before the real attempt. I took the first practice test to measure my understanding of the course, so I sat it immediately after the course, without the books and without preparing my index. I got an 89% score, which was very promising.

I needed about 10 days to go through the books and build my indexes. Then I sat the second practice exam with the index and the books. I got 87% this time, which gave me the confidence that I was well prepared, so I scheduled the exam.

In the exam, I had the following with me:
– The books
– A PE file format reference
– A list of common TCP/UDP ports
– A Metasploit Meterpreter command reference

I finished the exam after 2 hours and 30 minutes and got a 96% score :D.

SANS Advisory Board

On the same day, I got an invitation from SANS to join their advisory board, as I had scored high on GXPN.

EnCase Certified Examiner (EnCE) Review

Although it is vendor-specific, EnCE is considered one of the top certifications in digital forensics, and it appears in most job postings regarding forensics.

The requirements for this certificate:
– 64 hours of official training or 12 months of digital forensics experience.
– Passing exam phase I (multiple choice) and phase II (scenario and lab).

[My Path]
I’ve attended four on-demand courses from Guidance:
– Foundations in Digital Forensics with EnCase
– EnCase® Computer Forensics II
– EnCase® Computer Forensics I
– EnCE Prep Course

This path was expensive and long: it took me about 6 months to be ready for the exam.

A friend of mine has more than one year of experience with EnCase, so he applied for the exam without any course. In this case, Guidance asks for proof of experience.

[Exam Phase I]
It is an objective assessment with multiple-choice questions ranging from general questions about computers and filesystems to the Guidance forensic methodology. You can study from the "EnCE Study Guide", and I believe you will not get less than 50%. I passed the exam with a 97% score :D.

[Exam Phase II]
This exam shows how you perform forensics practically using EnCase. I got a hard disk image with a PDF. The PDF contains the description of the case and 15 questions, which you answer by investigating the image. The output of this phase is a report containing all the answers with evidence. There is no score for this part, only pass or fail.

[Certification]
After I submitted the report, it took about 3 weeks to become officially EnCE certified.

Eclipse, PHP, SVN and RSync

In this post I share my configuration: I am developing a PHP application and I have the following:

  • SVN server where the code exists.
  • Staging/testing server where I should run and test my code.
  • My laptop where I write my code.

Diagram: SVN server, developer laptop and staging/testing server

In an ideal scenario, the requirements are as follows:

  • Code tracking:
    • The code should be tracked and versioned in SVN.
  • Quality of the code:
    • While I am coding I have to test on the testing server.
    • I should commit only the good and working code into SVN server.
  • Fast development:
    • Time needed to build and move my code to the testing server should be minimized.

To meet my requirements, I am using the following tools on my development machine:

1- Eclipse
2- Eclipse Subversive
3- RSync

The only manual configuration needed is to set up rsync as an Eclipse builder. I created a new "Program" type builder and used this command line:

rsync -e 'ssh -i PATH_TO/id_rsa' -avh SOURCE_DIR root@vps:TARGET_DIR
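
Here, -a syncs recursively while preserving permissions and timestamps, -v lists the transferred files, -h prints human-readable sizes, and -e tells rsync to use SSH with the given identity file so no password prompt interrupts the build.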

Screenshot: the builder configuration in Eclipse

If you prefer to synchronize on each save, enable this builder to run "During Auto Build" in the Build Options tab and enable "Build Automatically" in the Project menu.


W3AF Error Fix: HTTPResponse object has no attribute path

Hello,

I was using w3af for an audit and I faced the following error:

Failed to initialize the 404 detection, original exception was: “‘HTTPResponse’ object has no attribute ‘path'”.

While searching for a fix, I noticed it is fairly common among Kali users.
I found the fix in this patch provided by the contributors:

https://github.com/andresriancho/w3af/commit/ea9e7bdc990e5912c0ffd89e7495c66af3bdfaab

So, open /usr/share/w3af/core/controllers/core_helpers/fingerprint_404.py
Delete line 184:
self._404_responses.append(j)
Add the following two lines instead:
four_oh_data = FourOhFourResponseFactory(j)
self._404_responses.append(four_oh_data)

Laptop Battery Drains to 0% While Powered Off

I’ve faced this issue with my new Lenovo Thinkpad E560.

Whether the laptop is in sleep mode or powered off, it completely loses its charge. Most likely this issue is caused by the BIOS, as it seems some components keep working and consuming battery even when the laptop is powered off.


I have two fixes for this:
– Temporary: after powering off, just unplug the battery and plug it back in.
– Permanent: update your BIOS; I updated to the latest version from the Lenovo website.

Install VMware Workstation Player Pro on Kali 2016 (Rolling)

**Update**: You can find an updated list of patches here: https://aur.archlinux.org/packages/vmware-patch/

VMware Workstation Player is free for non-commercial use only.

Steps:
1- Download VMware Player latest version from: https://www.vmware.com/products/player/playerpro-evaluation.html

2- Make the downloaded file executable:

chmod +x VMware-Player-12.1.1-3770994.x86_64.bundle

3- Run the installer and click through:

./VMware-Player-12.1.1-3770994.x86_64.bundle

4- When done, run VMware from the applications menu; it should say some libraries are missing and that it needs to install them.

Screenshot: the VMware Player dialog offering to install the missing components

Click Install.

5- Finally, in case you face an issue, you can fix it by patching the source code of the module that causes the problem. You can try the following patch for kernel 4.6.0-kali1-amd64 (the latest Kali at the moment of writing this article) [Credit: Here]:

1- Extract /usr/lib/vmware/modules/source/vmmon.tar
2- Modify vmmon-only/linux/hostif.c
3- Replace “get_user_pages” with “get_user_pages_remote”
4- Re-tar and replace the original
5- Extract /usr/lib/vmware/modules/source/vmnet.tar
6- Modify vmnet-only/userif.c
7- Replace “get_user_pages” with “get_user_pages_remote”
8- Re-tar and replace the original
Now you should be able to compile the modules successfully.
Tested on Kali Linux 2016 with kernel 4.6.
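
If you have to repeat this on several machines, the same steps can be scripted. Here is a minimal Python sketch of the edit above (my own sketch, untested beyond the logic shown; run it as root, and it keeps a .bak copy of each tarball):

# Sketch: automate the get_user_pages -> get_user_pages_remote edit
# for vmmon.tar and vmnet.tar. Run as root; originals are kept as .bak.
import os
import shutil
import subprocess

SRC = '/usr/lib/vmware/modules/source'
WORK = '/tmp/vmware-patch'
PATCHES = [
    ('vmmon.tar', 'vmmon-only/linux/hostif.c'),
    ('vmnet.tar', 'vmnet-only/userif.c'),
]

os.makedirs(WORK, exist_ok=True)

for tar_name, source_file in PATCHES:
    tarball = os.path.join(SRC, tar_name)
    shutil.copy2(tarball, tarball + '.bak')            # back up the original
    subprocess.run(['tar', '-xf', tarball], cwd=WORK, check=True)
    path = os.path.join(WORK, source_file)
    with open(path) as f:
        code = f.read()
    if 'get_user_pages_remote' not in code:            # avoid double-patching
        code = code.replace('get_user_pages', 'get_user_pages_remote')
        with open(path, 'w') as f:
            f.write(code)
    # Re-pack the patched tree over the original tarball.
    top_dir = source_file.split('/')[0]                # e.g. vmmon-only
    subprocess.run(['tar', '-cf', tarball, top_dir], cwd=WORK, check=True)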