Authors: Dorner, Julian; Favrichon, Samuel; Öğrenci, Arif Selçuk
Date accessioned: 2019-06-27
Date available: 2019-06-27
Date issued: 2016
ISBN: 978-1-5090-0619-9
ISSN: 2161-4393
Handle: https://hdl.handle.net/20.500.12469/502
DOI: https://doi.org/10.1109/IJCNN.2016.7727591
Title: Weight Exchange in Distributed Learning
Type: Conference Object
Pages: 3081-3084
Language: English
Rights: info:eu-repo/semantics/closedAccess
WOS ID: WOS:000399925503038
Scopus ID: 2-s2.0-85007227791

Abstract: Neural networks allow different organisations to extract knowledge from the data they collect about a similar problem domain, and learning algorithms usually benefit from having more training instances available. However, the parties owning the data are not always willing to share it. We propose a way to implement distributed learning that improves the performance of neural networks without sharing the actual data among the different organisations. This paper examines alternative mechanisms for exchanging weights among nodes. The key idea is to run the epochs of learning separately at each node, select the best weight set among the resulting neural networks, and publish it to every node. The results show that an increase in performance can be achieved with simple weight-exchange methods.
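The abstract describes the exchange protocol only at a high level: train locally for an epoch, pick the best weight set, broadcast it to all nodes. The sketch below illustrates one plausible reading of that loop on a toy linear-regression task. It is not the paper's implementation: the Node class, the helper names, and the selection rule (lowest loss on a shared held-out set) are all illustrative assumptions, and the paper may rank candidate weight sets differently.

```python
"""Sketch of epoch-wise weight exchange: each node trains on its private
shard, the best weight set is selected, and those weights are published
back to every node. All names and the selection rule are assumptions."""
import numpy as np

rng = np.random.default_rng(0)
w_true = rng.normal(size=5)  # shared underlying task ("similar problem domain")

def make_data(n):
    # Synthetic regression data; each call yields an independent sample.
    X = rng.normal(size=(n, 5))
    y = X @ w_true + 0.1 * rng.normal(size=n)
    return X, y

class Node:
    """One organisation: a private data shard plus a local linear model."""
    def __init__(self, X, y, lr=0.05):
        self.X, self.y, self.lr = X, y, lr
        self.w = np.zeros(X.shape[1])

    def train_one_epoch(self):
        # Full-batch gradient steps on the private shard only; the raw
        # data never leaves the node.
        for _ in range(10):
            grad = 2 * self.X.T @ (self.X @ self.w - self.y) / len(self.y)
            self.w -= self.lr * grad

    def loss(self, X, y):
        return float(np.mean((X @ self.w - y) ** 2))

# Held-out set used only to rank candidate weight sets (an assumed rule).
X_val, y_val = make_data(200)
nodes = [Node(*make_data(100)) for _ in range(4)]

for epoch in range(20):
    for node in nodes:
        node.train_one_epoch()                 # local learning
    best = min(nodes, key=lambda n: n.loss(X_val, y_val))
    for node in nodes:
        node.w = best.w.copy()                 # publish best weights to all

print("final validation MSE:", nodes[0].loss(X_val, y_val))
```

Note that only weight vectors cross node boundaries in this loop, which matches the abstract's premise that organisations exchange model parameters rather than training instances.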