
Web scraping is a technique for collecting information from the Web automatically, using software programs.

Typically, these programs simulate a human navigating the web, either by issuing HTTP requests directly or by embedding a browser in an application.
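
As a minimal sketch of the first approach, the short Python snippet below fetches a page with a plain HTTP request. It is only an illustration: the requests library, the URL, and the User-Agent string are assumptions chosen for the example, not part of any particular scraper.

```python
# Minimal sketch of the "direct HTTP" approach: fetch a page and look at the raw HTML.
# Assumes the third-party requests library; the URL is a hypothetical placeholder.
import requests

response = requests.get(
    "https://example.com/articles",              # hypothetical page to scrape
    headers={"User-Agent": "demo-scraper/0.1"},  # identify the client, as a polite crawler would
    timeout=10,
)
response.raise_for_status()   # fail loudly on HTTP errors
html = response.text          # raw HTML, ready to be parsed or searched
print(html[:200])             # show the first 200 characters as a sanity check
```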

Because it can extract publicly available information at scale, this type of technology is very useful for investigative work, whether by governments, police, or private investigators. It is an area of active development that shares a common goal with the vision of the Semantic Web, but it relies on practical, often ad hoc solutions built on existing technologies. Existing web scraping technologies offer different levels of automation:

from a human copying and pasting by hand, through scripted use of the HTTP protocol, to data mining algorithms and the recognition of semantic information.

There are many applications available for building customized web scraping solutions. Some can automatically recognize the structure of a given page, while others provide an interface in which the user selects the fields of interest within the document.
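
To illustrate that idea, the sketch below pulls a few user-chosen fields out of a fetched page using CSS selectors. BeautifulSoup and requests are assumed, and the URL and selectors ("div.product", "h2.title", "span.price") are hypothetical, standing in for whatever fields a user might mark as interesting in such a tool.

```python
# Sketch of extracting user-selected fields from a page's structure.
# Assumes requests and BeautifulSoup (bs4); the URL and CSS selectors are hypothetical.
import requests
from bs4 import BeautifulSoup

html = requests.get("https://example.com/products", timeout=10).text
soup = BeautifulSoup(html, "html.parser")

# The "fields of interest" a user might pick in a point-and-click tool,
# expressed here as CSS selectors.
fields = {
    "title": "h2.title",
    "price": "span.price",
}

records = []
for item in soup.select("div.product"):  # hypothetical repeating element on the page
    record = {}
    for name, selector in fields.items():
        element = item.select_one(selector)
        record[name] = element.get_text(strip=True) if element else None
    records.append(record)

print(records)  # one dictionary per product, with the selected fields
```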

Web scraping may also conflict with the legal terms of use of some websites, and how those terms apply is not entirely settled; duplicating original content, however, can in many cases be illegal.

These and other innovations are now possible at Pharmamedic.
