Saturday 29 September 2007

neuroscience - The Operation of tuning in the S1 layer of ventral model

Following my previous question on the ventral stream pathway and architecture, I would now like a brief example of how the S1 layer is constructed. In other words, how are all the simple units tuned, with Gaussian-like tuning for example? I am only looking for a cartoon example, step by step, of how this operation is carried out for given inputs (which inputs, and what do we mean by these inputs?) in order to obtain the tuned simple units.



We all know that each simple unit is obtained after a tuning operation over its inputs (subunits), selecting an optimal output that corresponds to the preferred orientation of that simple unit.



Moreover, I know that the S1 layer units perform a convolution over regions of the raw input image using Gabor filters at different orientations and sizes. The entire population of S1 units represents a convolution map of Gabor filters of different sizes and orientations with the entire raw image (I really did not understand this point); you can read subsection 2.2 of this article.
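That convolution step can be made concrete with a toy implementation (my own sketch, not code from the article; the filter sizes, wavelengths, and Gaussian envelope parameters below are illustrative assumptions). Each S1 "map" is simply the rectified response of one Gabor filter, of one size and orientation, slid over every position of the raw image; the Gaussian envelope in the filter is what gives each unit its Gaussian-like tuning around its preferred orientation:

```python
import numpy as np

def gabor(size, wavelength, orientation, gamma=0.3):
    """Build one Gabor filter: a cosine grating at the given
    orientation, windowed by a Gaussian envelope (the envelope is
    what produces the bell-shaped, Gaussian-like orientation tuning)."""
    sigma = 0.5 * wavelength
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    X = x * np.cos(orientation) + y * np.sin(orientation)
    Y = -x * np.sin(orientation) + y * np.cos(orientation)
    g = (np.exp(-(X**2 + (gamma * Y) ** 2) / (2 * sigma**2))
         * np.cos(2 * np.pi * X / wavelength))
    g -= g.mean()                 # zero the DC response
    return g / np.linalg.norm(g)  # unit energy

def s1_responses(image, sizes=(7, 9),
                 orientations=(0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Slide every filter in the bank over the image; each
    (size, orientation) pair yields one S1 response map, and the
    whole dict is the 'convolution map' the article talks about."""
    maps = {}
    for size in sizes:
        for theta in orientations:
            f = gabor(size, wavelength=size * 0.8, orientation=theta)
            h = size // 2
            padded = np.pad(image.astype(float), h, mode="edge")
            out = np.zeros(image.shape, dtype=float)
            for i in range(image.shape[0]):
                for j in range(image.shape[1]):
                    patch = padded[i:i + size, j:j + size]
                    out[i, j] = abs((patch * f).sum())
            maps[(size, theta)] = out
    return maps
```

Running this on an image containing a vertical bar gives its largest values in the map whose filter orientation matches the bar; that is exactly the sense in which each S1 unit is "tuned" to an orientation.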



The image below contains the image to be recognized by the brain and some simple units obtained after the tuning operation. What I would like is a brief example giving the details, step by step, of how the tuning operation is carried out.



[image: the input image and the tuned simple units]



[image: diagram of the tuning steps]



I didn't understand the concept well. That's why I need a concrete example, with a specific image, covering all the steps described in my attached image (step by step), because I still don't understand what is meant by the inputs x, etc.
So please, if anyone can give me a concrete example with a specific image that briefly answers this attached image, I would appreciate it.

Thursday 27 September 2007

human biology - Can a color-deficient person be made to visualize the missing colors?

It is a very interesting question and I made some effort to investigate the literature on this topic, but I do not yet have a definitive answer for you. But let's start from the beginning.



First of all, the reason for color deficiency can be not only the lack (rare) or impairment (more common) of certain types of color-perceiving cells (cones) in the retina, but also brain injury: central color blindness can develop after head trauma or as a result of some neurodegenerative diseases, such as Parkinson's disease. When the color blindness is of brain origin it is usually complete (no color is perceived), whereas congenital primary color blindness (receptor-based) is usually just the inability to distinguish one or two colors, while the rest can be more or less separated.



I searched PubMed for literature on the topic and found a recent PNAS paper about stimulation of the primary and secondary visual cortex in humans using intracranial electrodes. As they describe their results (bold emphasis mine):




When percepts were elicited from late areas, subjects reported that
they were simple shapes and colors....




But the paper investigated only healthy humans; no color-impaired subjects were included in the tests, so we cannot conclude from it whether these stimulations could elicit the perception of a color the person is incapable of seeing with the eyes.



I took this paper as a starting point and did some reference research, looking both at the papers referenced there and at newer publications citing it: PNAS is one of the top journals in this area, with a very high impact factor, and if there were a publication about brain stimulation and color blindness I would very likely have identified it.



During my investigation I came across a series of interesting articles devoted to a "cortical visual neuroprosthesis for the blind" (read this paper<1> from 2005 for a review of the topic), but this concerns the treatment of conventional blindness, not color blindness. There was no intersection in keywords or titles between color blindness and brain stimulation, either in the referenced articles or in the complete article database.



So, I would suggest that you address your question to some talented experimentalist, and maybe one day, who knows, we will read your name under a Nature article dedicated to a novel way to cure color blindness.




<1> -- unfortunately not available publicly for free, I am sorry.

Tuesday 25 September 2007

gene expression - What is the best way to express two proteins in a mammalian cell?

You can use a bidirectional promoter. The problem you mentioned, of the proteins not being expressed at the same level, arises because of competition for polymerase. But there are well-optimized parts, as well as commercially available vectors, that work fine.



You can clone the genes in serial order. It won't be a problem; just leave a ~100 bp linker after the polyA signal of the previous gene. If the cassette becomes too large, insertion becomes slightly more difficult. That's why people use IRES elements or 2A peptides, etc.



Retrovirus-based insertions work quite decently, but you can't control copy-number variation between different cells.



You can use two different promoters. The dynamics will depend on the promoter strengths and on the concentration of inducer.

Friday 21 September 2007

philosophy of science - How to define "evolution"?

Theoretical biology spans multiple disciplines, and the unaccompanied term evolution is defined differently in each.



Thus chemical evolution is different from time evolution in physics, and from the many other systems in theoretical biology that "evolve". All of these play a role in theoretical biology.



Also, what standard answer to evolution in the field of theoretical biology are you referring to?



I recall learning that an allele, a particular gene variant, was defined as one present in at least one percent of the population. That definition has since broadened.



Case in point: I don't believe an evolutionary master-framework exists (yet), along the lines of the desire for a Theory of Everything in physics.



On the other hand, as soon as you cross the boundaries of biological evolution, even in the most primitive of living, biological entities, you are bound to cross over into cultural evolution as well (I am no expert on the subject; there are probably intermediate steps too). Synergistic effects in the process of evolution may even be considered in quorum sensing's favorite model organism: Vibrio fischeri.



A simple search on the subject instantly yielded:




"Evolution of alkaline phosphatase in marine species of Vibrio". J Bacteriol....




In other words: molecular evolution, as one of the many evolutionary research focuses.



The idea and gross effects of Darwinian evolution are often rather straightforward at first glance. All the little details that have to be accounted for, with scientific rigor, as science itself evolves, are probably where the team-work starts, as do the discussions.



So the only fixture you can count on is the team in science.



The introduction to the evolutionary topic at hand that you would use in an abstract or introduction would depend on the scientific sub-discipline, and would likely already be readily available through peer-reviewed publication.

bioinformatics - What is the difference between local and global sequence alignments?

Global alignment is when you take the entirety of both sequences into consideration when finding alignments, whereas in local alignment you may take only a small portion into account. This sounds confusing, so here is an example:



Let's say you have a large reference, maybe 2000 bp, and a sequence of about 100 bp, and that the reference contains the sequence almost exactly. If you did a local alignment, you would get a very good match. But if you did a global alignment, it might not match; instead, it would look for matches throughout the entire reference, so you'd end up with an alignment with many large gaps. It does not matter that the sequence matches near-perfectly at one particular region of the reference, because the algorithm is looking for matches globally (i.e., across the whole reference).



If you have a really good match, it may not matter which type of alignment you use. But when there are mismatches and the like, it starts to matter, because of the scoring algorithms used. In the example above, let's say there is a 100 bp region in the reference that matches your 100 bp sequence with 85% identity: in a local alignment it is very likely to align there. Now let's say the first 30 bp of your sequence match a region at the beginning of the reference at 95%, the next 30 bp match a region in the middle at 85%, and the final 40 bp match a region at the end at about 90%. In a global alignment the best match is the gapped alignment, whereas in a local alignment the ungapped alignment would win. I think gap penalties are generally smaller in global alignments, but I'm not really an expert on the scoring algorithms.



Which one to use depends on what you are doing. If you think your sequence is a subsequence of the reference, do a local alignment. But if you think your entire sequence should match the entire reference, do a global alignment.
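The contrast described above is exactly the difference between the two classic dynamic-programming algorithms: Needleman-Wunsch (global) and Smith-Waterman (local). A minimal score-only sketch of both, with made-up scoring parameters (match +1, mismatch -1, gap -2), might look like this:

```python
def align_score(a, b, mode="global", match=1, mismatch=-1, gap=-2):
    """Best alignment score by dynamic programming.
    mode='global' -> Needleman-Wunsch: gaps at the ends are penalised,
    so the whole of both sequences must be accounted for.
    mode='local'  -> Smith-Waterman: scores are clamped at zero and the
    best cell anywhere in the matrix wins, so a short perfect region
    is enough."""
    n, m = len(a), len(b)
    # first row: global alignments pay for leading gaps, local ones don't
    prev = [j * gap for j in range(m + 1)] if mode == "global" else [0] * (m + 1)
    best = 0
    for i in range(1, n + 1):
        cur = [i * gap if mode == "global" else 0]
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            v = max(prev[j - 1] + s,   # align a[i-1] with b[j-1]
                    prev[j] + gap,     # gap in b
                    cur[j - 1] + gap)  # gap in a
            if mode == "local":
                v = max(v, 0)          # never carry a negative prefix
                best = max(best, v)
            cur.append(v)
        prev = cur
    return prev[m] if mode == "global" else best
```

For a short read embedded in a longer reference, e.g. `align_score("ACGT", "GGGGACGTGGGG")`, the local score is the full match score of 4, while the global score is dragged down to -12 by the eight unavoidable gaps.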

botany - How do trees lift water higher than 10 meters?

Atmospheric pressure corresponds to approximately 10 meters of water. This means that it is impossible to lift water higher than 10 meters by vacuum or capillary action (on Earth, under normal conditions).



There are trees taller than 10 meters.



How do they lift water to their tops?



UPDATE



In other words: how can the cohesion-tension theory be true if it apparently contradicts the laws of physics?



UPDATE 2



Atmospheric pressure helps to raise the water; it does not resist the rise. What resists is the weight of the water. Once the water column is 10 meters high, atmospheric pressure can't help any more.



No adhesion/cohesion mechanism can help here either, because it acts only within a thin molecular layer. To transfer the force further, pressure is required, and that is insufficient at 10 meters.



UPDATE 3



If we had a capillary small enough to raise water to 10 meters, and we then built a smaller capillary which we expected to raise water higher, we would fail. The water column would break and would not climb higher than 10 meters.



[image: water columns in capillaries of decreasing radius]



The meniscus acts like a small piston and can't help raise water higher than 10 meters.



UPDATE 4



The typical pressure distribution in a capillary is as follows:



[figure: pressure distribution in a capillary]



$P_0$ is atmospheric pressure. As you can see, right under the meniscus the pressure is lowered by $2 \sigma / R$, where $R$ is the radius of the meniscus and $\sigma$ is the surface tension; this term is called the "Laplace pressure". As you can see, it cannot exceed atmospheric pressure, because the continuity of the water would be broken in that case.



That is, no meniscus can raise water higher than 10 meters.
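As a back-of-envelope check on the numbers in these updates (my own arithmetic with standard constants, not figures from the question): the barometric limit $P_0 / \rho g$ is indeed about 10.3 m, and for a Laplace pressure $2\sigma/R$ to balance a 100 m column, the meniscus radius would have to be well under a micrometer:

```python
# standard physical constants (assumed values, not from the question)
P0 = 101325.0    # atmospheric pressure, Pa
rho = 1000.0     # density of water, kg/m^3
g = 9.81         # gravitational acceleration, m/s^2
sigma = 0.0728   # surface tension of water at ~20 degC, N/m

# barometric limit: the column height whose weight matches P0
h_max = P0 / (rho * g)            # ~10.33 m

def meniscus_radius(h):
    """Radius at which the Laplace pressure 2*sigma/R balances the
    hydrostatic pressure rho*g*h of a water column of height h."""
    return 2 * sigma / (rho * g * h)

r_100m = meniscus_radius(100.0)   # ~1.5e-7 m, i.e. ~0.15 micrometers
```

As I understand it, this is why the cohesion-tension account leans on menisci in extremely fine pores in the leaf, plus water sustaining tension, rather than on atmospheric pressure pushing the column up.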



The existence of taller trees PROVES that there are other significant mechanisms at work, not adhesion/cohesion and not capillarity.



UPDATE 5



The current explanation, as I understand it, rests on the claim that water, if put into a thin capillary, can behave like a solid body. In particular, it can withstand tension down to minus 15 atmospheres.



That is on the order of the tensile strength of concrete, so I don't believe it without additional proof.



I think it should not be hard to make a thin tube, put water into it, and check how high it can climb.



Has this ever been done?

bioinformatics - Is there a program that simulates biology on a molecular level?

There is a recent paper that introduced the first molecular-level whole-cell simulation.



Karr, J.R., Sanghvi, J.C., Macklin, D.N., Gutschow, M.V., Jacobs, J.M., Bolival, B., Assad-Garcia, N., Glass, J.I., & Covert, M.W. (2012). A whole-cell computational model predicts phenotype from genotype. Cell 150:389-401 DOI: 10.1016/j.cell.2012.05.044



The authors combined 28 different sub-modules of various biological processes from the literature. Each one operates at the level of macromolecules, although they are modelled in different ways: some by ODEs, some by logic, some by agent-based approaches. If you want to play around with it, their code is available online, and I wrote a basic introduction/summary of the paper.
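To give a flavor of what one of the ODE-type sub-modules looks like, here is a generic transcription-translation toy of my own (not code or parameters from the Karr et al. model):

```python
def simulate(t_end=20.0, dt=0.01, k_tx=1.0, k_tl=2.0, d_m=0.5, d_p=0.1):
    """Forward-Euler integration of a two-species module:
        dm/dt = k_tx - d_m * m       (mRNA synthesis and decay)
        dp/dt = k_tl * m - d_p * p   (translation and protein decay)
    Returns (mRNA, protein) levels at time t_end."""
    m = p = 0.0
    for _ in range(int(t_end / dt)):
        dm = k_tx - d_m * m
        dp = k_tl * m - d_p * p
        m += dt * dm
        p += dt * dp
    return m, p
```

At long times this module settles to m = k_tx/d_m and p = k_tl*m/d_p. Roughly, as I understand it, the whole-cell model's trick is to run its 28 sub-modules (ODE, logic, agent-based) over short time slices and reconcile their shared state variables between slices.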



Here are some related bio.SE questions motivated by that model:

Thursday 20 September 2007

botany - Plant anatomy, what are these stem like filaments growing under the flower

These are "floral spurs" – they usually contain nectar, and are part of the variety of complex flower shapes that orchids and other flowers, like columbines, have co-evolved with their pollinators.



Essentially, in order for the pollinator to reach the nectar down in the base of the spur, it must move into a position where pollen is deposited onto it; then when it moves to another flower, it can transfer that pollen and the plant is able to reproduce.



Of course, long tongues are good at getting nectar from long-spurred flowers, so there is both species filtering for long-tongued species, and presumably also selective pressure within a species for long-tongued individuals. This, in turn, drives selective pressure on the spurs.



Spurs are a classic example of a "key innovation": they have evolved separately in different types of flowers, and when they evolve, they often lead to rapid speciation (because a small change in spur architecture can constitute a barrier to reproduction). (See, for example, publications of Scott Hodges)

human biology - What is the biological mechanism linking temperature and probability to be infected with a virus?


It is common knowledge that when you're cold you could get a cold.




This may be a nice illustration why we need to be wary of “common knowledge”.




What is the mechanism linking temperature and viral infection?




This isn’t clear. There are a few proposed mechanisms but a likely explanation is: “there is no mechanism” – and the assumed correlation between cold temperature and catching a cold might be nothing more than an illusion – a form of confirmation bias.



In fact, we don’t even know for sure that the cold season coincides with cold temperatures.



On the other hand, a 2007 review [1] found that




… most of the available evidence from laboratory and clinical studies suggests that inhaled cold air, cooling of the body surface and cold stress induced by lowering the core body temperature cause pathophysiological responses such as vasoconstriction in the respiratory tract mucosa and suppression of immune responses, which are responsible for increased susceptibility to infections.
[emphasis mine]




So according to their results, potential mechanisms which link temperatures and viral infection are indirect:



  • The lowered core body temperature would imply, in my interpretation, that the body has to expend more energy to maintain its temperature, and hence less energy to power its immune system (which is expensive).

  • Lack of respiratory tract mucosa removes an important physical barrier between the environment and the body, and allows pathogens to enter the body with much less resistance.

[1] Mourtzoukou & Falagas: Exposure to Cold and respiratory tract infections, in Int J Tuberc Lung Dis. (2007), pp 938–943

Wednesday 19 September 2007

zoology - Why is the frog genome so much larger than a fish's?

As we have heard in the summaries of the human ENCODE project, 80 per cent of so-called junk DNA appears to have a function. Many fish have a genome only one tenth the size of a typical vertebrate genome. Why can fish get by with one tenth of the junk DNA and still be fully functional? What does a frog have that a fish lacks? I'm especially interested in whether we can see the difference somewhere, e.g., in complexity of physiology or anatomy.



Japanese pufferfish genome: 390 megabases, 47,800-49,000 genes (UniProt)



Medaka genome: 690 megabases, 24,600 genes



Clawed frog genome: 1,500 megabases, 23,500 genes
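Dividing the figures quoted above through each other makes the point numerically: gene count barely moves while genome size quadruples, so gene density (genes per megabase) collapses, and the extra frog DNA is overwhelmingly non-coding:

```python
# (megabases, gene count) as quoted above; the pufferfish gene count
# is taken at the midpoint of the quoted 47,800-49,000 range
genomes = {
    "pufferfish": (390, 48400),
    "medaka": (690, 24600),
    "clawed frog": (1500, 23500),
}

density = {name: genes / mb for name, (mb, genes) in genomes.items()}
# pufferfish ~124 genes/Mb, medaka ~36, clawed frog ~16
```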

Sunday 16 September 2007

biochemistry - Melting point of a fatty acid?


(1) Chain Length




This will definitely affect the melting point, as this website explains pretty well:



"Melting point principle: as the molecular weight increases, the melting point increases."




(2) Number of Methylene groups.




This is another way of distinguishing unsaturated from saturated fats. The more saturated a fat is, the straighter it is. Cis double bonds cause kinks, which disrupt the van der Waals forces along the rest of the carbon chain.



As such, from the link above again:



"On the other hand, the introduction of one or more double bonds in the hydrocarbon chain in unsaturated fatty acids results in one or more "bends" in the molecule. The geometry of the double bond is almost always a cis configuration in natural fatty acids. These molecules do not "stack" very well. The intermolecular interactions are much weaker than saturated molecules. As a result, the melting points are much lower for unsaturated fatty acids."
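Approximate literature melting points (quoted from memory, so treat them as illustrative rather than authoritative) show both trends at once: the melting point climbs with chain length among the saturated acids, and drops sharply with each added cis double bond at fixed chain length:

```python
# (name, carbon count, cis C=C double bonds) -> approx. melting point, deg C
melting_point_c = {
    ("lauric", 12, 0): 44,
    ("palmitic", 16, 0): 63,
    ("stearic", 18, 0): 69,
    ("oleic", 18, 1): 13,
    ("linoleic", 18, 2): -5,
}
```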




(3) Ionized state of the fatty acid.




This will have a very minor effect. The fatty acid has a carboxylate group (the "-ate" ending) which can carry a negative charge. However, from the link above again:



"However, in fatty acids, the non-polar hydrocarbon chain gives the molecule a non- polar character."



So even if the carboxylate carries a charge, the negative character is minuscule compared to the intermolecular forces exerted by the non-polar tail, particularly since the charge can be distributed between the two conjugated oxygen atoms, reducing the reactivity further.




(4) Degree of saponification.




I'm not super-familiar with the degree of saponification, but from a quick overview of the process I'd say this wouldn't affect the melting point, counter to my earlier comment. The process of making soap involves only the acidic portion of the fatty-acid triglycerides. That portion of the macromolecule is going to be pretty much the same regardless of the fatty acid, so it will have nearly the same reactivity regardless of chain length and conjugation.



What will not saponify are usually waxes, which are pretty much fully saturated hydrocarbon chains with very few (if any) acidic sites. Paraffin wax, for instance, does not saponify; a typical component has the formula $\mathrm{C_{31}H_{64}}$.




(5) Ability to alter entropy of water.




Like the degree-of-saponification option above, the ability of a fatty acid to alter the entropy of water correlates with the number of reactive sites throughout the molecule. As such, unsaturated fats are going to be slightly more reactive, as pi bonds are more reactive than sigma bonds.



So, given that this answer relies on a previous option, it's probably better to go with the previous option.



Ultimately, if I were answering the question I'd choose 1 and 2, since the rest either depend on those two or are minuscule. However, beware that it's your question to answer, not mine.

Friday 14 September 2007

human biology - Does making yogurt from non-pasteurized milk work against possible disease bacteria?

In short, 'No.'



Yogurt, in and of itself, is the product of milk fermented with specific strains of bacteria that are not particularly unique. Yogurt is just as hospitable to harmful bacteria as to beneficial ones.



The two mechanisms which spring to my mind that would prevent infection by harmful bacteria in yogurt would be the following:



  • The already dominant beneficial bacteria outcompete the harmful bacteria, effectively limiting the capacity for harmful bacterial growth.



  • The already present beneficial bacteria create extracellular products which damage harmful bacteria.



The second, if it happens at all, doesn't happen on a scale that I'm aware of. The first could happen, but I highly doubt it. It seems to me that it would be more likely to happen in cheese, where most of the easy resources have been consumed, which in yogurt they haven't.



Yogurt, and all products stemming from milk, are inherently safer not because of any bacteria that take up residence, but because the mammary glands of the animal producing the milk are effective filters against a variety of infections. Milk is constructed in the mammary glands, and the cells are selective about the output. Keep in mind, however, that it is by no means sterile. There are dozens of viruses and diseases which can be transmitted via breastmilk, and nursing women must adhere to guidelines concerning exposure to medications and diet. It is also possible for DDT and other compounds to become concentrated in breastmilk, resulting in harm to the child.



http://www.breastfeedingbasics.org/cgi-bin/deliver.cgi/content/Drugs/pre_pass.html



So, yes, while milk in and of itself is safer than other options, it is not risk-free. A virus could easily infect the mammal or herd producing the milk for sale to humans and only exhibit dangerous symptoms after the milk had been sold. This is why it is illegal to sell unpasteurized milk for human consumption in most U.S. states.

Thursday 13 September 2007

zoology - What is this crow eating, and is it a common part of the corvid diet?

Here's a picture (by Rob Curtis) of a crow carrying and eating the corpse of what looks a bit like a small hawk or falcon:



Crow carrying dead bird



Other pictures clearly show the crow is eating the dead bird. This image shows the underside of the head and beak; this one shows its legs, which are grayish.



  1. What bird is being eaten?

  2. Is this bird a usual part of the corvid diet? Or did the crow just opportunistically scavenge a dead bird?

plant physiology - Measuring sugar content of a tree

The difficult step will be getting samples of sap: have a look at the WP page for maple syrup for ideas about methods of tapping into the xylem of your trees.



You will then need to assay sucrose in the sample of sap. There are many commercial assay kits available (Google: sucrose assay), which rely on an enzyme, invertase, to convert the sucrose to glucose + fructose. The released glucose is then measured by a glucose oxidase assay. You would need some kind of colorimeter/spectrophotometer for quantitative results but there is a visible color change, so you could probably get a rough idea of what is going on by visual comparison with a set of glucose standards.
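If you do have access to a colorimeter, converting absorbance readings to concentrations is just a linear standard curve. A minimal sketch (the standard concentrations, absorbances, and the sample reading below are invented for illustration):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# glucose standards: concentration (mM) vs. measured absorbance
standards_mm = [0.0, 1.0, 2.0, 4.0]
absorbance = [0.02, 0.21, 0.40, 0.78]

a, b = fit_line(standards_mm, absorbance)
# invert the curve for an unknown sample reading of 0.50
sample_conc = (0.50 - b) / a   # ~2.5 mM
```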



Supplementary
An alternative would be to measure sugar concentration by refractometry: see here and here.

Wednesday 12 September 2007

human biology - Extremely rare occurence of Heart cancer?

This is a specific version of the great cancer question: "Why are some cancers more common than others?" The answer is either "Some have more common causes" and/or "Some are cured spontaneously more often". So really you are asking "What causes cancer?" and "How do we cure it?"



Given that, I don't expect a general definitive answer to be forthcoming. A specific answer might be possible, but I doubt there are any existing experiments that address this. With the obvious caveat that only experimental or statistical results can really answer your question, here are a couple of off-the-cuff hypotheses for consideration:



1 - Differences in stem cell populations.



Apparently, differentiation can actually be targeted as part of a treatment in some neuroblastoma cases - see the section on "Differentiation therapy" in this page from Sloan-Kettering. Cardiac stem cells seem to exist, but a difference in relative population and turnover rates between brain and heart might be related to the relative frequencies of these types of cancers. @WYSYWIG referred to neural progenitors in his comment above.



2 - A filtering effect due to more extreme selection pressure in the uniformly stressful environment of a beating heart.



Although there is a relation between elevated levels of oxidative stress and cancer-causing mutations, it could be that this only matters in a punctuated stress environment, where cells have down time to recover. A sustained stress environment might actually help prevent cancer. The path from normal to cancer cell requires multiple mutations, and I would not expect most pre-cancerous cells to be more fit than correctly wired ones. The extra stress of the cardiac environment might produce an elevated mutation rate, but also an even higher rate of apoptosis in early "sick" cells before they accumulate enough mistakes to become cancerous, resulting in a net decrease in the rate of cancer.



Both of these ideas fit with heart cancer being less common, and with metastasis from elsewhere being more common in the heart than primary cancer. Unfortunately, a higher rate of clearing of sick cells would necessitate a higher rate of replacement from stem cells, so these two hypotheses partially cancel each other out. Again, hypotheses without experiments are not really answers.

Saturday 8 September 2007

Difference between genetic engineering and synthetic biology

My understanding is that synthetic biology is genetic engineering 2.0. The difference is in the approach. Whereas genetic engineering projects are usually ad hoc, synthetic biology aims to apply proper engineering principles such as standardisation, modularisation, and reusability. Synthetic biologists create and use libraries of standard parts that are characterised, so they can be easily reused in projects. A part could be a gene, a terminator, a promoter, etc.



Synthetic biology also has greater ambitions. The focus is on creating whole systems/circuits of genetic regulation. This means there is a need for computational modelling and understanding of how biological systems work. In this aspect synthetic biology is a sister of systems biology a bit like synthetic chemistry (engineering) is a sister of chemistry (science).



You could of course argue that it's just a marketing ploy to invent a new name for something that is just the next step in genetic engineering, but the differences in approach are quite large and a new name signifies it.



With regards to synthesised vs. PCRed DNA: it doesn't really matter which you use in synthetic biology. However, cheap synthesis is one of the technologies that enables easier synthetic biology. The idea for the future is that you will be able to synthesise whole plasmids and chromosomes instead of having to "cut and paste" DNA. When that happens, physical parts repositories will be obsolete, but they will remain crucial in silico. Cheap synthesis is nice, but doesn't make or break synthetic biology.

Thursday 6 September 2007

anatomy - Evolution of long necks in giraffes

There seems to be a consensus that it is not competition for tall food: giraffes often feed on resources lower than their maximum possible height. See:



Simmons, R. E. & Scheepers, L. 1996. Winning by a Neck: Sexual Selection in the Evolution of Giraffe. The American Naturalist 148: 771–786



This paper put forward the idea that sexual selection is the reason behind long necks: longer-necked males would be dominant. But this theory has also been questioned. See:



G. Mitchell, S. J. Van Sittert & J. D. Skinner. 2009. Sexual selection is not the origin of long necks in giraffes. Journal of Zoology 278(4): 281–286



So in the end there is no clear consensus; some papers in the past few years have even returned to the competition-for-food theory. To put it simply: no, there is no consensus.

microbiology - Is there a practical upper limit to amount of nucleotides or genes in a transformed plasmid?

From my experience in the mammalian world (and this may apply to bacterial systems as well), it's not so much the number of genes in the plasmid as its actual size. The larger the construct is, the more difficult it will be to get it into your target cells in one piece, without degradation or shearing. Since the transformation efficiency is lower, you are getting fewer whole constructs per cell, so depending on how you've set up your promoters, the overall expression level can be significantly lower. The trouble with splitting your genes amongst two or more plasmids is that each will have different transformation efficiencies, and there will be a certain (perhaps large) number of cells that don't get the full complement of genes, interfering with phenotype analysis. And again, you'll also have to consider differential expression rates.



However, once you get your vector into the cells in one piece, theoretically they should all get expressed at approximately equal levels (assuming identical promoters). It's possible that steric hindrance among multiple transcription complexes may occur - I just don't know enough about bacterial transcription and the effects of circular plasmids to say.



One way to get around many of these factors would be to use a bacterial artificial chromosome, or BAC. BACs are 7-10 kb vectors that can carry inserts of up to roughly 300 kb and are then electroporated into cells. One of (the many) cool things about them is that they control their own duplication and partitioning at cell division, so succeeding generations should have essentially the same copy number as the original. They were heavily used during the Human Genome Project to amplify large sections of DNA for sequencing, but they are also used in many other studies, including synthetic biology. OpenWetWare has a list of some common ones, and NEB sells pBeloBAC11 systems.

Wednesday 5 September 2007

dna - How are atoms in benzopyridines and benzopurines numbered?

This is a question of chemical nomenclature, and the principal source for this is IUPAC (IUBMB in the case of biological molecules, but not here). You can find all the hetero-ring nomenclature references on the IUPAC web site:



http://www.acdlabs.com/iupac/nomenclature/



http://www.acdlabs.com/iupac/nomenclature/79/r79_702.htm



(I'm skipping the actual naming step)



Numbering is actually done according to
http://www.acdlabs.com/iupac/nomenclature/79/r79_72.htm



That means that, if you take the skeleton without the heteroatoms, you start numbering




in a clockwise direction commencing with the carbon atom not engaged
in ring-fusion in the most counter-clockwise position of the uppermost
ring, or if there is a choice, of the uppermost ring farthest to the
right, and omitting atoms common to two or more rings.




Now, if there is more than one possibility due to symmetry, you choose the numbering that gives the heteroatoms the smallest possible numbers (which means your purines would actually be wrongly numbered if they weren't an exception; see the list of retained names).



So, finally, in dxC and dxT we have the start at the lower nitrogen, going anti-clockwise (the other N at 3, the methyl at 6, the ribose at 8); and in dxA and dxG similarly the start at the lower-right N, going anti-clockwise, with the other Ns at 3, 6, and 8, and the ribose at 8 as well.

Saturday 1 September 2007

genetics - Effect of single-gene overexpression in the cell's response

What are the factors that modify the overall differential gene expression when a vector for single-gene overexpression is introduced?



If you overexpress a gene for a protein involved in signal transduction (e.g., a kinase, scaffold, or receptor) by transfecting cells with a vector, you overdrive the cells that use this signaling pathway; this is useful for isolating the pathway and studying it.



Is there any way to modify the overall gene expression or the cell's differential expression pattern by gene transfection? I think this would work if you delivered a gene for overexpression of proteins involved in RNA processing (e.g., splicing factors, ribosomal proteins, etc.), RNA transcription (e.g., TFs), or protein translation.