Grasping unknown objects is an ongoing topic in robotics research and is neither uniquely understood nor solved. Grasping is an essential capability: a robot that cannot grasp is severely limited, since most tasks autonomous robots must perform require it. Neuroscience research suggests a way to approach the grasp problem. The compliance of the human hand, together with the observation that humans exploit this compliance while closing the hand, shows that the grasping problem can be reduced from finding the complete hand configuration to finding a pre-shape of the hand. In addition to the pre-shape, a robot needs the position and orientation of the object in order to grasp it. This thesis presents an approach that perceives the environment with a 3D depth sensor and extracts shape primitives from point clouds; each point cloud contains the whole scene, including the objects to grasp. Along with each shape primitive, the pose of the corresponding object is estimated. The best-fitting shape primitive found by this approach is then used by a simple heuristic to select the best pre-grasp for that object. The shape primitives are initialized with a RANSAC algorithm and tracked and evaluated over time with a particle filter. Real-world experiments show that a robot using this approach is able to grasp a range of various objects. The results of this work show that the grasping problem can be reduced from searching for the complete hand configuration to searching for a pre-grasp and the corresponding pose of an object.
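To illustrate the primitive-initialization step mentioned above, the following is a minimal sketch of a RANSAC loop that fits a sphere primitive to a point cloud. It is not the thesis implementation; the function name, parameters, and the choice of a sphere as the example primitive are assumptions for illustration only.

```python
import numpy as np

def ransac_sphere(points, n_iters=200, inlier_tol=0.01, rng=None):
    """Fit a sphere primitive to an (N, 3) point cloud with basic RANSAC.

    Illustrative sketch only. Returns (center, radius, inlier_mask) of the
    best model found, or None if every sample was degenerate.
    """
    rng = np.random.default_rng(rng)
    best, best_count = None, 0
    for _ in range(n_iters):
        # A sphere is determined by 4 non-coplanar sample points.
        p = points[rng.choice(len(points), size=4, replace=False)]
        # |x - c|^2 = r^2 is linear in (c, r^2 - |c|^2):
        #   2 c . x + (r^2 - |c|^2) = |x|^2
        A = np.hstack([2 * p, np.ones((4, 1))])
        b = (p ** 2).sum(axis=1)
        try:
            sol = np.linalg.solve(A, b)
        except np.linalg.LinAlgError:
            continue  # degenerate (coplanar) sample, draw again
        center = sol[:3]
        radius = np.sqrt(max(sol[3] + center @ center, 0.0))
        # Score the model by counting points near the sphere surface.
        dist = np.abs(np.linalg.norm(points - center, axis=1) - radius)
        mask = dist < inlier_tol
        if mask.sum() > best_count:
            best, best_count = (center, radius, mask), mask.sum()
    return best
```

In a full pipeline along the lines described in the abstract, such a RANSAC fit would only seed the primitive hypothesis, which a particle filter then tracks and re-evaluates over time as new depth frames arrive.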