Throughout the architecture, engineering, and construction (AEC) lifecycle, 3D building models are extremely helpful. Such models, coupled with virtual walk-throughs, can enable customers to evaluate and be satisfied with their dream building. Manually creating a polygonal 3D model from a set of floor plans is nontrivial and requires skill and time. Additionally, applying sound design principles yields a holistic design that creates comfortable and cosy living environments. This project introduces and reviews a mechanism for applying design constructs after the conversion of 2D drawings into a 3D Building Information Model (BIM). This research utilizes and demonstrates automated 3D reconstruction of a real-world object from an uncalibrated image sequence of the same scene captured with a common camera, which can be used for interior and exterior design. 3D reconstruction from uncalibrated image sequences involves several key techniques, including feature matching, fundamental matrix estimation, projective reconstruction, camera self-calibration, and Euclidean reconstruction. The effectiveness of the algorithms was evaluated in experiments with several real image sequences.
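To illustrate one of the listed steps, fundamental matrix estimation finds the matrix F relating two views so that corresponding points satisfy x'ᵀ F x = 0. Below is a minimal NumPy sketch of the classical normalized eight-point algorithm; it is an illustrative, textbook-style implementation for noise-free synthetic data, not the pipeline evaluated in this work (function names are the author's own):

```python
import numpy as np

def normalize(pts):
    """Translate centroid to origin and scale mean distance to sqrt(2)."""
    c = pts.mean(axis=0)
    d = np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
    s = np.sqrt(2) / d
    T = np.array([[s, 0, -s * c[0]],
                  [0, s, -s * c[1]],
                  [0, 0, 1]])
    ph = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
    return (T @ ph.T).T, T

def eight_point(x1, x2):
    """Estimate the fundamental matrix from >= 8 point correspondences."""
    n1, T1 = normalize(np.asarray(x1, float))
    n2, T2 = normalize(np.asarray(x2, float))
    # Each correspondence contributes one row of the linear system A f = 0.
    A = np.array([[u2*u1, u2*v1, u2, v2*u1, v2*v1, v2, u1, v1, 1]
                  for (u1, v1, _), (u2, v2, _) in zip(n1, n2)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)        # null vector = flattened F
    U, S, Vt = np.linalg.svd(F)     # enforce the rank-2 constraint
    F = U @ np.diag([S[0], S[1], 0]) @ Vt
    return T2.T @ F @ T1            # undo the normalization
```

In practice a robust estimator such as RANSAC would wrap this solver to reject mismatched feature pairs before the projective-reconstruction stage.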
Recently, AJAX has given rise to a new set of web development techniques. This model enabled the single-page web application, in which parts of a page can be updated or replaced independently. Building a powerful single-page application with currently emerging technologies presents a new challenge. Gaining an understanding of the navigational model and user-interface structure of the source application is the first step toward successfully building a single-page application. This paper explores not only building a powerful single-page application but also Two-Dimensional (2D) drawing on images and videos. Moreover, it presents findings on client-side 2D multi-point polygon drawing concepts and on real-time data binding between the drawing module on images and videos and the view pages.
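The core of real-time data binding between a drawing module and its view pages can be sketched language-agnostically: the polygon is an observable model, and every bound view is notified when a vertex is added. The following Python sketch (hypothetical names, not the application's actual client-side code, which would typically be JavaScript) illustrates the idea:

```python
class PolygonModel:
    """A minimal observable multi-point polygon. Views subscribe via
    bind() and receive a snapshot of the vertices on every change,
    mirroring one-way real-time data binding to view pages."""

    def __init__(self):
        self.points = []       # list of (x, y) vertices
        self._observers = []   # callbacks bound to this model

    def bind(self, callback):
        self._observers.append(callback)

    def add_point(self, x, y):
        self.points.append((x, y))
        for cb in self._observers:        # push the change to every bound view
            cb(list(self.points))         # pass a copy, not the live list

# A "view" that simply records each update it receives.
updates = []
poly = PolygonModel()
poly.bind(updates.append)
poly.add_point(10, 10)
poly.add_point(50, 10)
poly.add_point(30, 40)
```

In a browser the callbacks would redraw the polygon overlay on the image or video frame; the model itself stays independent of any rendering surface.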
Textual content within the World Wide Web (WWW) continues to grow rapidly. Combining sophisticated text processing and classification techniques can therefore produce highly accurate search results. Although a large body of research has delved into these problems, each study has its own theories and approaches shaped by its data collection. This remains a continually challenging task, and this paper converges solutions and comprehensive comparisons across the different approaches, which will help in implementing a robust search engine. The research shows that probabilistic text classification models classify documents robustly. However, to improve search results involving short texts, a hybrid approach combining rules with statistical neural network models should be adopted. Pre-processing and post-processing modules should be adapted as pruning components, and because the data are dynamic, the processing pipeline should be updated frequently.
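A multinomial Naive Bayes classifier is one of the standard probabilistic text classification models of the kind discussed above. The following self-contained sketch with Laplace smoothing is illustrative only and not the system evaluated in this paper (labels and tokenization are simplified assumptions):

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (tokens, label) pairs. Returns log-priors,
    per-class word counts, and the vocabulary for multinomial NB."""
    class_docs = defaultdict(int)
    word_counts = defaultdict(Counter)
    vocab = set()
    for tokens, label in docs:
        class_docs[label] += 1
        word_counts[label].update(tokens)
        vocab.update(tokens)
    total = sum(class_docs.values())
    priors = {c: math.log(n / total) for c, n in class_docs.items()}
    return priors, word_counts, vocab

def classify(tokens, priors, word_counts, vocab):
    """Pick the class maximizing log P(c) + sum log P(token | c),
    with add-one (Laplace) smoothing over the vocabulary."""
    best, best_score = None, -math.inf
    for c in priors:
        denom = sum(word_counts[c].values()) + len(vocab)
        score = priors[c] + sum(
            math.log((word_counts[c][t] + 1) / denom) for t in tokens)
        if score > best_score:
            best, best_score = c, score
    return best
```

For short texts such a model is exactly where the hybrid approach matters: rule-based pre-processing can expand or normalize the few available tokens before the probabilistic scoring is applied.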