General overview:
HDFS is the Hadoop Distributed File System. Intuitively, you can think of it as a file system that spans multiple servers.
HBase is a column-oriented data store, modeled after Google's BigTable. If that doesn't mean anything to you, think of it as a non-relational database that provides real-time read/write access to data. It is integrated with Hadoop.
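To get a feel for what "real-time read/write" looks like in practice, here is a minimal sketch in the HBase shell (the `users` table and `info` column family are made-up names for illustration):

```
create 'users', 'info'                      # create a table with one column family
put 'users', 'row1', 'info:name', 'Alice'   # write a single cell, visible immediately
get 'users', 'row1'                         # read that row back in real time
scan 'users'                                # iterate over all rows in the table
```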
Pig and Hive are ways to query data in the Hadoop ecosystem. The main difference is that Hive is closer to SQL, while Pig uses its own language called Pig Latin.
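To make the difference concrete, here is roughly the same question asked both ways; the `page_views` table/relation and its columns are hypothetical:

```sql
-- HiveQL: reads like ordinary SQL
SELECT user_id, COUNT(*) AS visits
FROM page_views
GROUP BY user_id;
```

```pig
-- Pig Latin: a step-by-step data-flow script
views   = LOAD 'page_views' AS (user_id:chararray, url:chararray);
grouped = GROUP views BY user_id;
counts  = FOREACH grouped GENERATE group AS user_id, COUNT(views) AS visits;
DUMP counts;
```

Same result either way: Hive declares what you want, Pig spells out how to get it.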
Azkaban is a prison... I mean, a workflow scheduler. It is basically similar to Oozie in that you can run map/reduce, Pig, Hive, bash, etc. as a single workflow.
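As a sketch of what that looks like, a classic Azkaban flow is just a set of small .job property files; the job names and the scripts they run here are hypothetical:

```
# count.job -- run a Pig script as one step of the flow
type=command
command=pig count_visits.pig

# report.job -- runs only after the "count" step above succeeds
type=command
command=bash build_report.sh
dependencies=count
```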
At the highest level, you can think of HDFS as your file system, with HBase as the data store. Pig and Hive are your query tools for that data store, and Azkaban is your way of scheduling jobs.
A stretched analogy:
Suppose you are familiar with Linux ext3 or ext4 for the file system, MySQL/PostgreSQL/MariaDB/etc. for the database, SQL for data access, and cron for job scheduling. (On Windows, swap ext3/ext4 for NTFS and cron for Task Scheduler.)
HDFS replaces ext3 or ext4 (and is distributed!), HBase takes on the role of the database (and is non-relational!), Pig/Hive are your ways of accessing the data, and Azkaban is your way of scheduling jobs.
NOTE: This is not an apples-to-apples comparison. It is just meant to show that the Hadoop components are an abstraction intended to give you a workflow you are probably already familiar with.
I highly recommend looking into the components further, as you will have fun. There are so many interchangeable components in Hadoop (YARN, Kafka, Oozie, Ambari, ZooKeeper, Sqoop, Spark, etc.) that you will find yourself asking this question again.
EDIT: The links you posted describe HBase and Hive/Pig in more detail, so I tried to give an intuitive picture of how they all fit together.