Well, at this point, I feel like we should take some steps to understand what is going on inside a computer when we are composing.
The first step is to input MIDI data. There are a number of ways to do this. The most popular by far is the MIDI keyboard. Also common are entering notes directly in the DAW with the mouse, wind controllers, pads, and people smearing their brain matter across the keyboard. Except not that last one.
Your computer's sound card is naturally equipped with MIDI control from 15 years ago and sounds that were recorded before you were born. Yay! MIDI data is sent to your sound card, which figures out what on earth is going on and then passes the data along as needed, either to the sound card's own sampler component or to the program you are working in, so it can do whatever it does with it.
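If you want to picture what "figuring out what is going on" actually means, here's a rough sketch in Python (my choice for the examples here, not anything your sound card literally runs): a channel message is just a few bytes, and the receiver decodes the status byte to get the message type and channel before passing things along.

```python
# Minimal sketch of decoding a raw 3-byte MIDI channel message.
# Status byte: upper nibble = message type, lower nibble = channel (0-15).
NOTE_OFF = 0x80
NOTE_ON = 0x90

def decode_midi(data: bytes):
    """Return (type, channel, note, velocity) for note messages, else None."""
    status, note, velocity = data[0], data[1], data[2]
    message_type = status & 0xF0      # e.g. 0x90 = note-on, 0x80 = note-off
    channel = status & 0x0F
    if message_type == NOTE_ON and velocity == 0:
        message_type = NOTE_OFF       # note-on with velocity 0 doubles as note-off
    if message_type in (NOTE_ON, NOTE_OFF):
        kind = "note_on" if message_type == NOTE_ON else "note_off"
        return (kind, channel, note, velocity)
    return None                       # CC, pitch bend, etc. ignored in this sketch

# Middle C (note 60), channel 1, velocity 100:
print(decode_midi(bytes([0x90, 60, 100])))   # ('note_on', 0, 60, 100)
```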
We all know there are two main ways to generate sound within a virtual environment.
The first is called Sampling. With sampling, we use recordings of "instruments" and play them back, possibly with effects, repitching, or other processing applied. A lone sample is not actually an instrument. It is a thumbprint of an instrument, a little recorded snippet built from live elements of its physical counterpart, but by itself it cannot behave like one. However, when combined with dozens (or thousands) of other samples, and, here's the kicker, a set of instructions on how to manipulate that sound and have it respond to control, it becomes a sort of Frankenstein semi-instrument. It's not actually a violin, but it can trick you into thinking it is one.
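Here's a toy sketch of that idea in Python, with a faked snippet standing in for the recording, and a root key plus repitch rule standing in for the "set of instructions" (the note numbers and root key are just assumptions for the example):

```python
# Toy sampler sketch: one recorded snippet plus a set of "instructions"
# (root key, repitch rule, velocity scaling) starts to act like an instrument.
import numpy as np

SAMPLE_RATE = 44100
ROOT_NOTE = 60   # assumed MIDI note the snippet was recorded at (middle C)

def load_sample():
    # Stand-in for loading a real recording from disk; a faked decaying tone here.
    t = np.arange(int(SAMPLE_RATE * 1.0)) / SAMPLE_RATE
    return np.sin(2 * np.pi * 261.63 * t) * np.exp(-3 * t)

def play_note(sample, midi_note, velocity):
    # "Instructions": repitch by resampling, scale loudness by velocity.
    ratio = 2.0 ** ((midi_note - ROOT_NOTE) / 12.0)    # semitones -> speed ratio
    positions = np.arange(0, len(sample) - 1, ratio)   # read faster or slower
    repitched = np.interp(positions, np.arange(len(sample)), sample)
    return repitched * (velocity / 127.0)

snippet = load_sample()
rendered = play_note(snippet, 67, 100)   # the same snippet pretending to be a G
```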
The second is called Synthesis. With synthesis, we use oscillators (or their simulated counterparts) to generate sound, again with effects or other processing applied. Synthesizing sound is really the same thing a violin player or a timpani player does with their physical instrument: generating the sound directly rather than replaying a recording of it. A synthesizer, together with its controls, is a complete instrument.
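And here's the synthesis version of the same toy sketch: no recording anywhere, just an oscillator and an envelope doing all the work.

```python
# Toy synthesis sketch: no recording anywhere, just an oscillator building the
# waveform from scratch, with an envelope standing in for the player's control.
import numpy as np

SAMPLE_RATE = 44100

def midi_to_hz(note):
    return 440.0 * 2.0 ** ((note - 69) / 12.0)

def synth_note(midi_note, duration=1.0, velocity=100):
    t = np.arange(int(SAMPLE_RATE * duration)) / SAMPLE_RATE
    phase = 2 * np.pi * midi_to_hz(midi_note) * t
    wave = sum(np.sin(k * phase) / k for k in range(1, 8))   # crude sawtooth-ish tone
    envelope = np.exp(-2 * t)                                # simple decay "control"
    return 0.2 * wave * envelope * (velocity / 127.0)

rendered = synth_note(69)   # an A440 generated from nothing, not replayed
```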
Both of these are generally packaged as plug-ins, such as a VSTi, or as patch formats, such as .sf2 SoundFonts. These plug-ins and their supporting files hold the instructions and raw material for creating sound. Your hard drive(s) hold all of that information: the body of the instrument, so to speak.
In a sampler, when we press a note, the sound is pulled from memory, has any effects applied, then goes through the sound card, through a digital-to-analog converter (DAC), through the amp, and finally out your speakers.
In a synthesis situation, the sound is created either by physical oscillators in a hardware box or by the processor running code. Either way, it then goes through our friend the DAC, then the amp, then the speakers.
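If it helps, here's a rough sketch of that last hop, assuming the third-party sounddevice library (just one of many ways a program can hand a finished buffer to the sound card):

```python
# Rough sketch of the last leg of the chain: a finished buffer of samples
# (from a sampler or a synth; here just a plain sine standing in for either)
# is handed to the sound card, whose DAC feeds the amp and speakers.
import numpy as np
import sounddevice as sd   # assumed third-party library: pip install sounddevice

SAMPLE_RATE = 44100
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
buffer = 0.2 * np.sin(2 * np.pi * 440.0 * t)   # stand-in for sampler/synth output
buffer = buffer * 0.8                          # any "effects" would be applied here
sd.play(buffer, SAMPLE_RATE)                   # enqueue samples for the sound card / DAC
sd.wait()                                      # block until playback finishes
```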
In a way, the input is the mouthpiece, MIDI is the joints that hold it together, the physical machine is the body, and the speakers are the bell.
I'll leave drawing conclusions to you, but I think the idea that the computer itself is the instrument mostly comes from a fundamental misunderstanding of how MIDI works. Your computer is an instrument as much as a concert hall is an instrument. It's really the individual components, in their particular alignment, that allow something to function, temporarily, as an instrument.
As for samplers, I do not classify them as full-fledged instruments, because they do not respond fully to every playing behavior. This is changing, but for the most part they are the same idea as splicing recorded bits of tape together to create a performance. Note that my hobby is making samplers and recording samples; I don't make this judgement blindly. :P