There are serious security issues to overcome in such an architecture, depending on the protocol and the means by which the P2P communication is achieved. Any time you start running server code (meaning code that listens for new requests on a port and performs work based on those requests) on clients, you open up a serious security can of worms, not to mention issues with NATs, firewalls, etc.
Security is perhaps a problem, but only a moderate one as long as the code is kept well-managed. Issues with NATs can be overcome by relaying through the central server.
People should not confuse the client connectivity topology we choose to use with transport latency issues, or with the connectivity problems of the current FTP-based design. The current FTP-based thing is problematic, to be sure, but the connectivity problems are not inherent to the design. Those can be fixed by moving to a more reliable server and moving the server communication code out of proc, both of which I am working on (using, in part, some code from Nums).
I'm not talking about latency issues or connectivity problems, I'm just talking about a peer-to-peer system. Moving to a more reliable server would help with these issues, though, which is definitely appreciated.
"moving the server communication code out of proc" - I'm not too sure what you meant by this.
But we will reach scalability limits with the current architecture. Understand, there is no real server today. No server, that is, other than the FTP server itself.
Which is a real server.
There is no code authored by me running on any machine except the client simulators each of us runs. They cooperate through the FTP server by moving files, but in an incredibly inefficient manner, because they are forced to shoehorn their semantics into file movements and the semantics of the FTP protocol. Some people call this a shared-file or file-sharing version of client-server. Early corporate email systems (circa 1990) such as CC:Mail and Microsoft Mail used this architecture. It is not true client-server (like POP/SMTP). Were we to move to a true client-server model, where we had special code running on a server implementing our specific semantics, we could easily scale to several thousand sims connected through a single server. Building this on top of HTTP or PHP or raw sockets is mostly a question of ease of implementation; the result is the same, a true client-server architecture. Scaling beyond that would require multiple servers with connection logic or partitioning, or a P2P architecture.
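For what it's worth, the "special code running on a server" doesn't have to be elaborate. Here is a minimal sketch (not our actual design; the port number and the PING command are invented for illustration) of a PHP socket listener that speaks its own request semantics directly instead of shoehorning them into file movements:

    <?php
    // Minimal sketch of a "true client-server" listener.
    // Port 9000 and the PING/PONG exchange are made up for illustration.
    $server = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
    socket_bind($server, '0.0.0.0', 9000);
    socket_listen($server);

    while (true) {
        $client = socket_accept($server);
        // Read one request line and dispatch on our own semantics,
        // rather than polling an FTP directory for files.
        $request = trim(socket_read($client, 1024, PHP_NORMAL_READ));
        if ($request === 'PING') {
            socket_write($client, "PONG\n");
        } else {
            socket_write($client, "ERROR unknown command\n");
        }
        socket_close($client);
    }

Each sim would open a connection and speak that little protocol directly, which is the whole difference between file-sharing client-server and the real thing.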
... Why the lecture? I don't see what point you're trying to make.
As a follow-up to my last post... Are there any problems with having the download script save data to disk? If there are, is it possible to embed binary data in the HTTP response the PHP script returns? If that wouldn't work either, a text-based organism file would probably be possible. I know for sure that would work...
Yes, you can put binary data into an HTTP stream - how do you think binary downloads work?
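On the PHP side it's just a matter of setting the headers and writing the bytes. A minimal sketch (the download.php name and organism.dat path are placeholders):

    <?php
    // Hypothetical download.php: streams an organism file back as raw binary.
    $path = 'organism.dat';
    header('Content-Type: application/octet-stream');  // opaque bytes, not HTML
    header('Content-Length: ' . filesize($path));
    readfile($path);  // copies the file's bytes straight into the HTTP response

The application/octet-stream content type tells the client to treat the body as opaque bytes rather than text, so nothing in the file gets mangled.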
That leaves uploading the file, which is where I don't know how to proceed.
Neither do I, since I'm not familiar with VB.
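The receiving end, at least, is standard PHP. A minimal sketch, assuming the client POSTs the file as multipart/form-data under a hypothetical "organism" field name:

    <?php
    // Hypothetical upload.php: receives a file POSTed as multipart/form-data.
    // The 'organism' field name and the uploads/ directory are assumptions.
    if (isset($_FILES['organism']) && $_FILES['organism']['error'] === UPLOAD_ERR_OK) {
        $dest = 'uploads/' . basename($_FILES['organism']['name']);
        if (move_uploaded_file($_FILES['organism']['tmp_name'], $dest)) {
            echo "OK";
        } else {
            echo "ERROR could not save file";
        }
    } else {
        echo "ERROR no file received";
    }

The VB side would just need to issue that multipart POST; any HTTP client code that can do file uploads should be able to talk to it.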