Crowdsourcing uses human intelligence to solve tasks that are still difficult for machines. Tasks on existing crowdsourcing platforms are batches of relatively simple microtasks. However, real-world problems are often more complex than microtasks: they require data collection, organization, pre-processing, analysis, and synthesis of results. In this thesis, we study how to specify complex crowdsourcing tasks and realize them with the help of existing crowdsourcing platforms. The first contribution of this thesis is a model of complex workflows that provides high-level constructs to
describe a complex task through orchestration of simpler tasks. We provide algorithms to check termination and correctness of a complex workflow for a subset of the language (these questions are undecidable
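To give a flavour of what such orchestration of simpler tasks can look like, here is a minimal sketch in Python; the operators Task, Sequence, and Parallel are hypothetical illustrations of high-level constructs, not the actual operators of the workflow language developed in the thesis.

```python
# Minimal sketch of workflow orchestration over microtasks.
# Task, Sequence, and Parallel are illustrative assumptions,
# not the constructs of the thesis's workflow model.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Task:
    """An atomic microtask: a function from an input to an answer."""
    name: str
    run: Callable[[object], object]

    def execute(self, data):
        return self.run(data)

@dataclass
class Sequence:
    """Run sub-tasks one after another, piping each output to the next."""
    steps: List[object]

    def execute(self, data):
        for step in self.steps:
            data = step.execute(data)
        return data

@dataclass
class Parallel:
    """Run independent sub-tasks on the same input and collect results."""
    branches: List[object]

    def execute(self, data):
        return [branch.execute(data) for branch in self.branches]

# Example: fork a document into two annotation microtasks, then merge.
annotate = Parallel([
    Task("tag_topic", lambda doc: f"topic({doc})"),
    Task("tag_sentiment", lambda doc: f"sentiment({doc})"),
])
workflow = Sequence([annotate, Task("merge", lambda parts: " & ".join(parts))])
print(workflow.execute("doc42"))  # -> "topic(doc42) & sentiment(doc42)"
```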
in the general case). A well-known drawback of crowdsourcing is that human answers might be wrong. To address this problem, crowdsourcing platforms replicate tasks and forge a final trusted answer out of
the produced results. Replication increases the quality of data, but it is costly. The second contribution of this thesis is a set of aggregation techniques in which merging of answers is realized using Expectation Maximization (EM), and replication of tasks is performed online, after considering the confidence estimated for already aggregated data. Experimental results show that these techniques aggregate the returned answers while achieving a good trade-off between cost and data quality, both for the realization of batches of microtasks and for that of complex workflows.
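To give an idea of how EM-based answer aggregation works, the following sketch alternates between estimating the true answers from worker reliabilities and re-estimating reliabilities from the current answer estimates, in the spirit of the classical Dawid-Skene approach. The majority-vote initialization, the accuracy clamping, and the confidence threshold used to decide on further replication are illustrative assumptions, not the exact techniques of the thesis.

```python
# Sketch of EM-based aggregation for binary microtasks, in the spirit
# of Dawid-Skene; initialization, clamping, and the replication
# threshold below are illustrative assumptions.
from collections import defaultdict

def em_aggregate(answers, n_iter=20):
    """answers: list of (task, worker, label) with label in {0, 1}.
    Returns {task: estimated probability that the true label is 1}."""
    tasks = {t for t, _, _ in answers}
    workers = {w for _, w, _ in answers}
    # Initialize each task's estimate with the majority vote.
    prob = {t: sum(l for t2, _, l in answers if t2 == t)
               / sum(1 for t2, _, _ in answers if t2 == t)
            for t in tasks}
    for _ in range(n_iter):
        # M-step: a worker's accuracy is the expected rate of agreement
        # with the current soft estimates of the true labels.
        agree, total = defaultdict(float), defaultdict(float)
        for t, w, l in answers:
            agree[w] += prob[t] if l == 1 else (1 - prob[t])
            total[w] += 1
        # Clamp accuracies away from 0 and 1 to keep posteriors defined.
        acc = {w: min(max(agree[w] / total[w], 0.01), 0.99)
               for w in workers}
        # E-step: posterior of label 1 per task, assuming a uniform
        # prior and conditionally independent workers.
        for t in tasks:
            p1 = p0 = 1.0
            for t2, w, l in answers:
                if t2 != t:
                    continue
                p1 *= acc[w] if l == 1 else (1 - acc[w])
                p0 *= acc[w] if l == 0 else (1 - acc[w])
            prob[t] = p1 / (p1 + p0)
    return prob

votes = [("t1", "w1", 1), ("t1", "w2", 1), ("t1", "w3", 0),
         ("t2", "w1", 0), ("t2", "w3", 0)]
conf = em_aggregate(votes)
# Online replication decision: request more answers only for tasks
# whose confidence is still low (the 0.9 threshold is an assumption).
needs_more = [t for t, p in conf.items() if max(p, 1 - p) < 0.9]
```

The point of combining the two steps is that replication stops as soon as the posterior confidence of the aggregated answer is high enough, so extra cost is spent only on tasks whose answers remain ambiguous.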